Abstract: How can we determine what the brain computes? Here we describe a framework to infer canonical structure in this computation. It is based on the theory that the brain uses nonlinear computation by redundant, recurrently connected population codes to perform approximate probabilistic inference. Specifically, these computations implement a message-passing algorithm operating on a probabilistic graphical model whose interactions are encoded by overlapping probabilistic population codes. We describe an analysis method that aims to identify this algorithm from complex neural data elicited during perceptual inference tasks. To recover the message-passing algorithm from neural recordings, we must simultaneously find (1) the representation of task-relevant variables, (2) the interactions between the decoded variables that define the brain's internal model of the world, and (3) the global parameters that define the message-passing inference algorithm. We hypothesize that the global parameters are canonical, that is, common to all parts of the graphical model regardless of interaction strength, so that they generalize to new graphical models. We have applied this analysis method to artificial neural recordings generated by a simple model brain that uses an advanced mean-field method to perform approximate inference. We formulate learning the inference algorithm from the given neural dynamics as an optimization problem, and successfully recover our model inference algorithm. This success encourages us to apply these methods to more sophisticated brain models, trained neural networks, and, eventually, to large-scale neural recordings to uncover canonical properties of the brain's distributed nonlinear inferential computations.
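To make the abstract's central object concrete, the following is a minimal sketch of mean-field inference on a pairwise graphical model. The abstract refers to an "advanced mean field method"; as a simplified stand-in, this example uses plain naive mean-field fixed-point updates for a binary (Ising-style) model. All names, the coupling matrix `J`, the biases `h`, and the damping parameter are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def mean_field_ising(J, h, n_iters=200, damping=0.5):
    """Naive mean-field updates for a pairwise binary (Ising) model.

    Iterates the fixed-point equations m_i <- tanh(h_i + sum_j J_ij m_j),
    with damping for numerical stability. The returned vector m holds the
    approximate posterior means of each variable. (Illustrative sketch;
    the paper's "advanced" method would add correction terms.)
    """
    m = np.zeros_like(h, dtype=float)
    for _ in range(n_iters):
        m_new = np.tanh(h + J @ m)          # local message-passing update
        m = damping * m + (1 - damping) * m_new
    return m

# Toy model: three coupled variables with weak positive couplings (assumed values).
J = np.array([[0.0, 0.3, 0.1],
              [0.3, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
h = np.array([0.5, -0.2, 0.0])
m = mean_field_ising(J, h)
```

In the framework described above, the analogous recovery problem would be to infer the couplings (`J`, `h`) and the global form of the update rule jointly from recorded dynamics, rather than assuming them as done here.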