Models of the Mind: How Physics, Engineering and Mathematics Have Shaped Our Understanding of the Brain

The brain is made up of 85 billion neurons, which are connected by over 100 trillion synapses. For over a century, a diverse array of researchers have been trying to find a language that can be used to capture the essence of what these neurons do and how they communicate – and how those communications create thoughts, perceptions and actions. The language they were looking for was mathematics, and we would not be able to understand the brain as we do today without it.

In Models of the Mind, author and computational neuroscientist Grace Lindsay explains how mathematical models have allowed scientists to understand and describe many of the brain's processes, including decision-making, sensory processing, quantifying memory, and more. She introduces readers to the most important concepts in modern neuroscience, and highlights the tensions that arise when bringing the abstract world of mathematical modelling into contact with the messy details of biology.

Each chapter focuses on mathematical tools that have been applied in a particular area of neuroscience, progressing from the simplest building block of the brain – the individual neuron – through to circuits of interacting neurons, whole brain areas and even the behaviours that brains command. Throughout, Grace looks at the history of the field, starting with experiments done on neurons in frog legs at the turn of the twentieth century and building to the large models of artificial neural networks that form the basis of modern artificial intelligence. She demonstrates the value of describing the machinery of neuroscience using the elegant language of mathematics, and reveals in full the remarkable fruits of this endeavour.

400 pages, Hardcover

Published May 4, 2021

About the author

Grace Lindsay

1 book · 50 followers
Grace Lindsay is a computational neuroscientist currently living in London. She completed her PhD at the Center for Theoretical Neuroscience at Columbia University, where her research focused on building mathematical models of how the brain controls its own sensory processing. Before that, she earned a Bachelor's degree in Neuroscience from the University of Pittsburgh and received a research fellowship to study at the Bernstein Center for Computational Neuroscience in Freiburg, Germany. She was awarded a Google PhD Fellowship in Computational Neuroscience in 2016, and has spoken at several international conferences. She is also the producer and co-host of Unsupervised Thinking, a podcast covering topics in neuroscience and artificial intelligence.

Ratings & Reviews

Community Reviews

5 stars: 249 (54%)
4 stars: 151 (32%)
3 stars: 55 (11%)
2 stars: 6 (1%)
1 star: 0 (0%)
Brian Clegg
Author · 212 books · 2,842 followers
April 5, 2021
This is a remarkable book. When Ernest Rutherford made his infamous remark about science being either physics or stamp collecting, it was, of course, an exaggeration. Yet it was based on a point - biology in particular was primarily about collecting information on what happened rather than explaining at a fundamental level why it happened. This book shows how biologists, in collaboration with physicists, mathematicians and computer scientists, have moved on the science of the brain to model some of its underlying mechanisms.

Grace Lindsay is careful to emphasise the very real difference between physical and biological problems. Most systems studied by physics are a lot simpler than biological systems, making it easier to make effective mathematical and computational models. But despite this, huge progress has been made drawing on tools and techniques developed for physics and computing to get a better picture of the mechanisms of the brain.

In the book we see this from two directions - it's primarily about modelling the brain's processes and structures, but we also see how the field of artificial intelligence has learned a lot from what we know of the way the brain works (and doesn't work very well) in developing the latest generation of AI systems. Lindsay shows how we have come to get a better understanding of the mechanisms of neurons, memory formation, sight, decision making and more, looking at both the detailed level of neurons and larger scale structure. Many of the chapters take us on entertaining diversions related to the history of the development of these ideas. When I mentioned the book to someone who works in neurology, the response was that most computational neuroscience books they'd come across contained a barrage of equations - Lindsay does this with hardly an equation in the text (the only one I remember is Bayes' theorem), though there are a few in an appendix for those who like their content a bit crunchier.

The only real criticism I have is that it could have done with some paring back. The book felt a bit too long, too many people were name-checked, and too many bits of brain functionality were covered. I also wouldn't have finished the book with a 'grand unified theories of the brain' chapter, which had too much of an overview feel and threw in concepts like consciousness that require whole books in their own right - it would have been better if the last chapter had pulled things together and looked forward to the next developments. However, this remains an excellent introduction to an area that few of us probably know anything about, and all the more fascinating because of that.
Ali
17 reviews · 3 followers
June 25, 2022
A story of the collaboration between neuroscience and artificial intelligence.
I concluded that the brain has many specialized modules, shaped by natural selection specifically for our survival in our environment.
Maybe there is no general intelligence algorithm; instead, a very specialized structure for survival, with intelligence as a byproduct of all those modules.
When you zoom in on the details of the brain's mechanisms, you see that there are specific structures for different tasks: motion detection, categorization, movement coordination, several layers of visual processing, and specific neurons for audio source tracing.
Current neural network architectures are so simple compared to the brain's mess that extracting a general intelligence model from it seems impossible.
Nguyễn Nghị
5 reviews · 5 followers
August 3, 2021
The author provides high-level descriptions of almost every family of models one would find in an introductory course in computational neuroscience, and she does so in such a clear and clean writing style. Accompanying each model is a vivid historical account of how it came about and, in a broader sense, how the enterprise of theoretical modeling has changed through time in the way it is perceived and practiced. Highly recommend this to anyone curious about the inquiries and methods of computational neuroscience or the neural sciences in general.
Lourens
99 reviews · 1 follower
December 28, 2021
Accessible and easy-to-read introduction to the applications of a wide array of mathematical models in neuroscience. I had a relatively easy time with the mathematical concepts, as I'd seen most of them before in some form, but Lindsay does a great job giving intuitive explanations anyway.

Especially surprising to learn that the convolutional neural network (CNN), a dominant method in computer image recognition, was inspired in concept by the workings of our visual cortex. Whenever I heard people say that neural networks are modelled after how the brain works, I always assumed it was only the very high-level idea. The concept of the CNN was later modified and scaled up to be more useful for many computer applications, and I learned from this book that these modified networks came to resemble the visual cortex even more closely. That is a fascinating discovery.

A wide array of different mathematical tools are covered in this book. Personally I enjoyed the mentions of graph theory/network science and information theory.

For those curious about how intuition about the brain's working can be translated into exact, quantifiable terms, this is a great book. There is much ambiguity around what goes on in our heads, and math definitely does not get rid of all of it (yet). But it is reassuring to see the progress that can be made this way.
48 reviews · 3 followers
May 11, 2021
Though it's elegantly written and impressive in its breadth, this book didn't quite work for me. It attempts to do many things at once and ends up doing all of them passably. In a nutshell, it is a history of the various ways in which mathematics is used to model different aspects of brain physiology and functionality. If this sounds extremely general, it's because it is.

To structure the discussion, the author associates each neuroscience (or psychology) problem with one mathematical (or physical) idea. Thus spike production is described by differential equations. Computation and memory formation are discussed using algebra. Eyesight brings convolutions. With movement we get some matrix theory. Neural coding (of course) provides an opportunity to introduce information theory. Mapping structure to function is paired with graph theory. The discussion of rational decision making introduces Bayesian probabilities (kudos here to the author for correctly attributing "Bayes' rule" to its actual author, Laplace). A final chapter is dedicated to attempts at representing the entire brain using an analog of (Gibbs) free energy.

It's a neat concept, but it only leaves room for the most basic treatment of each topic, and so many interesting developments are just hinted at, or left out altogether.

The price of this flitting about is that no single idea is discussed in any depth, and the author gives no personal view of how all these approaches might add up to a science. In the last chapter she appears to timidly hitch her wagon to Friston's "free energy" theories, but the problem here is that no one seems to see a clear path to practical, useful results using this approach.
William Ngiam
8 reviews
July 4, 2023
This was an enlightening read, especially for an early-career cognitive neuroscientist like myself!

Grace Lindsay provides a very accessible and clear overview of our current understanding of the brain, detailing the history of research and insight that has informed the different models of various aspects of the brain and mind. Lindsay masterfully avoids relying on mathematical equations to explain the models, as science textbooks might, instead using carefully crafted examples and expositions. I also appreciated the emphasis on the role of mathematics and computing in the discoveries, and the overall impression that an integrative approach across various domains – psychology, neurobiology, physics, and others – is needed to make scientific progress on something as complex as the brain.

With quick anecdotes to describe the scientists behind the advances, I found this book to be entertaining and easy-to-read – I found myself wanting to continue on to see what was in the next chapter. I found the book a little on the lighter side in detail of the various models and ideas, though likely because I was familiar with many of the topics presented in the book. That being said, I definitely picked up some new knowledge and a new appreciation for various models that will inspire my own research!
1 review · 1 follower
January 4, 2023
A good introduction to computational neuroscience. With few equations for such a math-heavy subject, it may be superficial for people with some background in the field, but it is clearly written and great for helping beginners understand the concepts and intuition behind various models.
118 reviews · 3 followers
July 8, 2021
A popular science guide to the role of physics, maths and engineering in our understanding of the brain.

I work in this field, so it's kind of weird seeing things you've read as research papers appear in a popular science guide: like watching the slow morphing of cutting-edge research that proves to be correct into established knowledge, as it happens.

It's a good book: clear, and I definitely learnt stuff from it. It's a perfect book to explain to outsiders what the field does, and it goes into some detail about the technical ideas. A lot of the technical ideas are mathematical, and there are very few equations in the main body of the book, so there are long paragraphs that say in words what is most easily expressed in maths; but I guess that is how it must be. Hopefully it will serve as an appetiser for some people to learn even more - I know I would have been super intrigued by this book as a physics undergrad.
Graeme Newell
271 reviews · 87 followers
May 7, 2023
I was not a fan of this book.

First of all, let me just say that the subject matter is fascinating. It’s a cool topic, how physics, engineering, and mathematics have shaped our understanding of the brain.

The writing in this book was dry as a bone, so dense and technical that it made my brain hurt just trying to follow along.

The book was quite repetitive. It also used lackluster examples. Sure, Lindsay talks about how physicists and engineers have contributed to our understanding of the brain, but where are the concrete examples? Where are the stories of scientists who made groundbreaking discoveries? Where are the case studies of patients who benefited from these discoveries?

Overall, I would not recommend "Models of the Mind" to anyone who is not a die-hard neuroscience fan. If you're looking for an engaging, accessible read on the subject, you're better off looking elsewhere.
Rory Fox
Author · 1 book · 25 followers
June 22, 2021
Mathematically modelling the brain (not the mind), the book does exactly what it promises. It provides around a dozen different mathematical ways of looking at specific brain activities and it links each example to the (neuro) biology of the processes involved, as well as descriptions of the historical scientists who made the discoveries.

The book combines scientific insights with detailed mathematical information, although it keeps the formulas out of the text, in an appendix at the end. The book also provides asides about the historical figures it mentions. For example, we hear that Claude Shannon at Bell Laboratories used to do his thinking riding a bike around the lab whilst juggling (47%). Depending on their preferences, readers will find examples like this either enlivening or distracting.

Personally I enjoyed the detailed scientific descriptions and I particularly appreciated how the author showed the developments taking place, step by step.

For example, in chapter 3 we hear about the Perceptron, a 1950s US Navy-funded computer. It was designed to mimic the binary nature of how brain neurons fire on and off, to give a truth/falsity model. But it was unable to differentiate two ons (trues) from two offs (falses), which was a major problem. So a multi-layered neural network approach was designed, using backpropagation, to resolve the problem.

This solution worked, but it was no longer ‘modelling’ what neuroscientists thought was occurring in the brain. It was suggesting an alternative way in which the brain could work, and so it led to new investigations to see whether the brain was in fact working in that way.
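The limitation described here can be seen in a few lines of Python. This is a toy single-layer perceptron of my own devising (not code from the book, and not Rosenblatt's actual machine), trained with the classic perceptron rule: it learns AND easily but can never learn XOR.

```python
# Toy single-layer perceptron: weighted sum of inputs plus bias, thresholded at 0.
def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train(data, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights toward each misclassified target."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = train(AND)
and_correct = all(predict(w, b, x) == t for x, t in AND)  # separable: learnable
w, b = train(XOR)
xor_correct = all(predict(w, b, x) == t for x, t in XOR)  # not separable: always fails
```

The failure isn't about training time: no single line can separate XOR's classes, which is exactly the problem that pushed the field toward multi-layer networks.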

The book is 400 pages, so it contains a lot of information. Even so I was occasionally left wanting more information about some threads in chapters.

For example, chapter 10 focuses upon how the brain can be modelled using probabilistic logic. One of the problems with this kind of logic is that the outputs depend very heavily upon what is assumed as a background normality (i.e. the prior probability). At the end of the chapter the author asks whether these background probabilities are inherited or learned. She cites an experiment involving chickens that were subjected, from hatching, to light sources from below (i.e. contrary to overhead sunlight). This seems to have surprised the chickens, suggesting elements of inherited expectations.

Similar experiments with human babies have shown surprise when toys defy laws of gravity. I was interested to know more about the extent to which basic knowledge could be inherited, rather than learned, but the chapter closed and we were onto the next topic.

Overall this is a very detailed and informative book which readers interested in Maths and Neurobiology will particularly enjoy.

This is an honest review of an Advance Review Copy of the text.
Georgii Korshunov
3 reviews · 5 followers
June 20, 2023
This is a pretty easy read, with lots of stories and analogies. It’s both a story of how humanity gradually understood the workings of the brain hardware, and an explanation of how this hardware works on various levels. Each chapter contains a brief history of scientists involved, how they approached their problem, sometimes with a few anecdotes — I like that the name “dynamic programming” was deliberately chosen to not scare government officials with math talk, even though it’s basically a math concept first. Every chapter introduces a few math concepts, but they are all explained with great analogies (and there are formulas in a separate chapter for math-heads).

It covers many questions about our wet-ware: how neurons use electricity to pass signals, how calculations can be made by a network, how memories are stored, plus perception, movement, decisions and rewards. And even though many models are introduced, including some discovered pretty recently, an honest answer to many of those questions is still "we don't know", which is what makes this topic so interesting to me.
Audrey Phan
53 reviews · 3 followers
February 20, 2023
this was REALLY well written and the author did an amazing job of explaining really complex topics in a way that I could (somewhat) understand!! definitely a great read for people interested in computational/cognitive/human neuroscience. it was at times a bit dense so I had to read slowly, kind of 1 chapter at a time.
Gustavo Juantorena
27 reviews · 2 followers
May 9, 2021
When it comes to rating a book, I always find it necessary to compare it with books on similar subjects, and in this case it is clear that very little else exists. Computational neuroscience is a sub-discipline of neuroscience that keeps on growing, and yet it does not enjoy the popularity of others. I think Grace Lindsay's work of popularisation is very good, managing to explain, with sufficient simplicity, topics that are often dry and rarely addressed in the more classic popular "neuro" literature. Recommended for anyone interested in the workings of the nervous system and artificial intelligence.
Dragana
8 reviews
August 25, 2021
Every page was a pleasure! An amazing synthesis of the current state of the field and how we got here.

The author draws well-elaborated connections across the methods and theories we use today in neuroscience, interwoven with the stories of the scientific discoveries and the researchers behind them.
Mishehu
528 reviews · 26 followers
January 15, 2022
First-rate work of popular science: packed with fascinating detail and hugely engaging. This is an author to watch.
9 reviews · 2 followers
March 16, 2024
Took a lot of time to reach the end, but it was worth it! The way history, mathematics and different levels of neuroscience were stitched together... Would strongly urge you to read!
Ben Zimmerman
138 reviews · 7 followers
September 2, 2022
In Models of the Mind, Grace Lindsay maps out the major mathematical models that have driven neuroscience through a narrative history. Although each chapter exists almost as a standalone narrative essay, common themes include the bi-directional impact of the fields of neuroscience and computer science, the importance of understanding the history of an idea while building on top of it, and the fruitful impact of borrowing tools and ideas from other disciplines.
I thought it was also instructive to pay attention to which components of "modeling the mind" were not carried through the chapters - for instance, neurons are basically left out of the chapters on modeling rational behavior. This was interesting to me, and I wondered whether, once we knew enough about the brain, we would model complex behavioral outcomes with neurons. The book also stimulates lots of thinking about when mathematical modeling is appropriate, when it isn't necessary, and where the transition lies between modeling components of the brain and modeling components of the mind. What does that difference mean?

Each chapter focuses on a particular mathematical tool that has been applied to some area of neuroscience. The first chapter introduces the utility of using mathematical models to think clearly about problems in neuroscience, by offering precise definitions and a rigorous system for manipulating abstract variables that can lead to insight. She also introduces the main objection to using mathematics in biology, which is that the complexity of biology makes it difficult to meaningfully utilize mathematical models. She outlines the pitfalls of applying mathematics to the real world: oversimplification and an appeal to aesthetics, which are common in pure mathematics and physics.

Chapter 2 discusses the history leading up to the Hodgkin and Huxley model for action potentials, which goes into the history of modeling electricity in circuits, and then using those principles to model the electrical properties of neurons.
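Hodgkin-Huxley itself is a system of four coupled differential equations, but the underlying circuit idea can be sketched with its much simpler relative, the leaky integrate-and-fire neuron. This is my own toy illustration with made-up parameter values, not code from the book: the membrane is an RC circuit that charges toward a level set by the input current and "spikes" when it crosses a threshold.

```python
# Leaky integrate-and-fire neuron: membrane as an RC circuit.
# dV/dt = (-(V - V_rest) + R*I) / tau; spike and reset when V crosses threshold.
def simulate_lif(current, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, resistance=1.0, dt=0.1, steps=2000):
    v = v_rest
    spikes = 0
    for _ in range(steps):                       # forward-Euler integration
        v += (-(v - v_rest) + resistance * current) * (dt / tau)
        if v >= v_thresh:                        # threshold crossing = action potential
            spikes += 1
            v = v_reset
    return spikes

quiet = simulate_lif(current=10.0)   # drives V toward -55 mV: stays subthreshold
active = simulate_lif(current=20.0)  # drives V toward -45 mV: fires repeatedly
```

The qualitative point survives the simplification: below a critical input current the cell never fires, above it the cell fires rhythmically.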

Chapter 3 discusses the early links between thinking about the brain and computation. I particularly loved this chapter because it emphasizes the direction of the history of thought in a unique way. In current times, it's common to think about the computer as a metaphor for the brain, and this metaphor is often criticized. But early on, the whole idea of the modern computer came from trying to emulate what the brain did. This chapter goes through the history of how McCulloch and Pitts worked together to imagine ways that neurons might instantiate logic, and how John von Neumann built off of that work to imagine building a computer. The chapter also discusses Frank Rosenblatt's Perceptron, which represents one of the first attempts at building artificial intelligence into a computer, and the first attempt to put McCulloch-Pitts networks to the test. Interestingly, this first instance made clear that learning in an artificial neural network would not define clear logic gates like McCulloch and Pitts imagined.
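The McCulloch-Pitts idea - that a threshold unit summing binary inputs can instantiate logic - fits in a few lines of Python. This is my own sketch, not the book's notation: each gate is just a choice of weights and threshold, and gates compose into networks that compute any Boolean function.

```python
# A McCulloch-Pitts unit: fires (1) iff the weighted sum of its
# binary inputs reaches the threshold. Logic gates fall out of the weights.
def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

# Units compose into networks: XOR from three gates, even though
# no single threshold unit can compute it on its own.
def XOR(a, b): return AND(OR(a, b), NOT(AND(a, b)))
```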

Chapter 4 was about using Hopfield networks and attractors to model memory. The chapter begins by discussing concepts of the engram or memory trace in the brain and debates about where and how memories were stored between people like Hebb and Lashley. Hopfield, a physicist, contributed a mathematical model of neurons that could implement content-addressable memory - where some of the memory could be retrieved from just a part of it. It is a type of recurrent network that has certain states, called attractors, that other patterns of activity will naturally evolve towards. The chapter goes on towards a deeper discussion of memory, the evolution of Hopfield networks, and direct experimental evidence for certain kinds of networks.
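Content-addressable memory as described here can be demonstrated with a miniature Hopfield network. This is a sketch of my own (Hebbian outer-product weights, synchronous threshold updates), not the book's code: a stored ±1 pattern is recovered from a corrupted cue because the pattern is an attractor of the dynamics.

```python
# Tiny Hopfield network: store +/-1 patterns via Hebbian outer products,
# then recover a stored pattern from a corrupted cue.
def train_hopfield(patterns):
    n = len(patterns[0])
    weights = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                      # no self-connections
                    weights[i][j] += p[i] * p[j] / len(patterns)
    return weights

def recall(weights, state, steps=5):
    state = list(state)
    n = len(state)
    for _ in range(steps):                      # synchronous updates toward an attractor
        state = [1 if sum(weights[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, 1, -1, -1, 1, -1]
weights = train_hopfield([stored])
cue = [1, -1, -1, -1, 1, -1]                    # one bit flipped
recovered = recall(weights, cue)                # settles back onto the stored pattern
```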

Chapter 5 was about oscillations through excitation and inhibition. Oscillations work as noise reducers to synchronize neuronal firing. But oscillations require reciprocal inhibition, so this chapter goes a bit into the history of discovering inhibitory neurons. The chapter also discusses the need for balance between excitation and inhibition, and investigates the limits and behaviors of large circuits with excitatory and inhibitory cells using computational models built on the Hodgkin-Huxley equations. The chapter also discusses the history of chaos theory, and how its concepts have helped us understand how the brain maintains balance through chaos.

Chapter 6 discusses the application of convolution to neural networks to solve computer vision problems. I also really liked learning from this chapter how direct the connections between computer science and neuroscience were. The first convolutional neural networks were basically made to replicate Hubel and Wiesel's findings on the hierarchical structure of early visual pathways - to the extent that Kunihiko Fukushima, who invented the first convolutional neural networks, expressed frustration that Hubel and Wiesel had not published more information about the architecture further into the brain. This chapter also goes into the history of using convolutions for template matching in general, which gave some insight into where the idea came from to use convolution in neural networks - something I've personally always wondered about. This architecture was fairly disregarded for computer vision problems until quite recently, when it was found that once the convolutional neural network was large enough, it worked really well. That happened around 2012; I was in a neuroengineering training program in my PhD at the time, and I remember how exciting it was that year, and in the subsequent years when computer vision started outperforming human vision on classification tasks for the first time. In general, Lindsay does a great job making you feel like part of a great chain of human thought, starting hundreds of years ago and leading up right to today. I think especially as an academic, I live for this feeling and love books that produce such a vivid connection to past thinkers.
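The template-matching intuition behind convolution is easy to see in one dimension. Here is an entirely toy example of my own (strictly speaking cross-correlation, which is what CNNs actually compute): a small filter slides along a signal, and the response is largest where the signal matches the filter's shape.

```python
# Convolution as template matching: slide a small filter across a
# signal; the response peaks where the signal matches the template.
def convolve_valid(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# An "edge detector" filter responds to a jump in the input,
# loosely like the edge-selective cells Hubel and Wiesel found.
signal = [0, 0, 0, 1, 1, 1]
edge_filter = [-1, 1]
response = convolve_valid(signal, edge_filter)  # peaks exactly at the step
```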

Chapter 7 is about the application of information theory to brains and really about attempts to quantify information, since that is the thing that the brain processes. This chapter had a nice discussion on how far you could actually use mathematical principles to learn about biological systems, since biology has a lot of constraints that may make it less than an ideal information processor.

Chapter 8 was about the motor cortex and debates about what the motor cortex actually does at its most fundamental level - does it send signals to particular muscles or does it fundamentally encode whole patterns of movements like reaching. The chapter goes into a history of research on the motor cortex and then the application of kinetics to modeling how neural firing translates into force on joints, and later kinematics to model directions of movements. The chapter also talks about studying populations of neurons and then using dimensionality reduction to understand what the population activity encodes.

Chapter 9 discusses applications of graph theory to structural networks by thinking about brain regions as nodes in a network and their connections as edges. These graph theoretical metrics have been used especially in humans at larger scales as biomarkers of certain psychiatric or medical disorders.
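As a sketch of what those graph metrics look like, here is a toy "connectome" with made-up region names (my own example, standard library only), computing two of the simplest graph-theory measures: a node's degree and its clustering coefficient.

```python
# Brain-region graph: regions as nodes, connections as edges.
from itertools import combinations

edges = {("V1", "V2"), ("V1", "MT"), ("V2", "MT"), ("MT", "PFC")}  # toy connectome

def neighbors(node):
    return {b for a, b in edges if a == node} | {a for a, b in edges if b == node}

def degree(node):
    """Number of regions directly connected to this one."""
    return len(neighbors(node))

def clustering(node):
    """Fraction of a node's neighbor pairs that are themselves connected."""
    nbrs = neighbors(node)
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(sorted(nbrs), 2)
                if (a, b) in edges or (b, a) in edges)
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)
```

Real connectome studies compute these same quantities (plus path lengths, modularity, and so on) over graphs of dozens to hundreds of regions.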

Chapter 10 discusses rational decision making through the lens of probability and Bayes' rule. In this chapter, I really liked the discussion of the debate between using Bayesian probability or not, mainly because using Bayesian probability requires that you have some explicit knowledge about what is called a "prior", and there are lots of arguments in science about how you get your knowledge about that prior for lots of applications.
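Bayes' rule and the influence of the prior can be shown in a few lines (the scenario and numbers are illustrative choices of mine, not from the book): with identical evidence, the posterior swings dramatically depending on the assumed prior, which is exactly what the debate is about.

```python
# Bayes' rule: posterior = likelihood * prior / evidence.
def posterior(prior, likelihood_if_true, likelihood_if_false):
    evidence = likelihood_if_true * prior + likelihood_if_false * (1 - prior)
    return likelihood_if_true * prior / evidence

# Same observation (a rustle in the grass), two different priors
# on "predator present"; likelihoods held fixed.
weak_prior = posterior(prior=0.01, likelihood_if_true=0.9, likelihood_if_false=0.1)
strong_prior = posterior(prior=0.5, likelihood_if_true=0.9, likelihood_if_false=0.1)
```

With a 1% prior the posterior is only about 8%; with a 50% prior the identical evidence yields 90%.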

Chapter 11 discusses reinforcement learning and computational strategies for sequential learning. This chapter goes into the history of reinforcement learning in behavioral psychology and takes us through the modeling evolution. Early behaviorists were mostly concerned with behavioral outcomes and rewards, but their framework didn't deal well with steps in sequential planning that don't themselves produce rewards. This led to the concept of the value function of a state, which is defined recursively as the reward plus the (discounted) value of the next state. This kind of modeling allows reinforcement learning to be good at learning longer tasks (like winning a level of a video game), where the reward doesn't come until after many actions.
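The recursive value function described here can be sketched with value iteration on a toy chain of states where the only reward sits at the end (my own example, with an assumed discount factor of 0.9): repeated sweeps propagate the distant reward backward, so early states acquire value even though they are never directly rewarded.

```python
# V(s) = r(s) + gamma * V(next state), solved by repeated sweeps
# over a deterministic chain s -> s+1.
def value_iteration(rewards, gamma=0.9, sweeps=50):
    n = len(rewards)
    values = [0.0] * n
    for _ in range(sweeps):
        for s in range(n):
            nxt = values[s + 1] if s + 1 < n else 0.0
            values[s] = rewards[s] + gamma * nxt
    return values

# Reward only in the final state, as in a long task with delayed payoff.
rewards = [0, 0, 0, 0, 1]
values = value_iteration(rewards)   # earlier states inherit discounted value
```

After convergence the values fall off geometrically with distance from the reward, which is how an agent can prefer the right action many steps before the payoff arrives.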

Chapter 12 is the final chapter and glosses over some of the "grand unified theories of the brain," many of which I've encountered before in other pop science books. This includes Karl Friston's free energy principle - the idea that the brain tries to minimize the difference between its predictions about the world and the actual information it receives - which is not easily falsifiable as a theory, but may be a useful guide for thinking. The chapter also touches on the Thousand Brains Theory by Jeff Hawkins and Tononi's Integrated Information Theory of consciousness.

Overall, this was one of the best neuroscience pop science books that I've read. Scientifically, it is mature and diplomatic, and I think it did a really fair job of describing important debates. This is refreshing to me at a time when the expectation seems more and more to be to take sides and commit to a camp. The book is fascinating and well-written, which is even more impressive since it was written by a relatively junior neuroscientist rather than a senior scientist or a professional writer. I read it as part of the neuroscience book club that I co-lead, and about half of the participants felt like there was too much math and experimental detail while the other half felt like there wasn't enough, so I think the balance of depth was just about perfect. I believe that Grace Lindsay successfully landed her first tenure-track faculty position the same year that I fumbled through my own unanimous rejections, and it was both intimidating and awe-inspiring to think that she represents the level at which strong academic institutions are hiring new faculty. If I had any criticism, it is maybe that the chapters feel fairly separate and not strung together solidly enough. This led me to feel like I wanted to learn a lot more about all of the sub-topics introduced in the chapters, and it sort of leaves you with a sense of wanting instead of deep satisfaction. But leaving the reader wishing a book was longer and deeper is far from the worst thing a book can do.
Yuriria
10 reviews
February 7, 2023
Amazing and inspiring book! A must read to any human interested in neuroscience and a modelling approach to biological problems. I just loved it. Each chapter is a joyful learning experience :))
Jaime Rios
5 reviews
December 11, 2022
Absolutely wonderful. I feel like I learned about cool ass shit. The book was so easy to follow considering how complicated the subject matter can get. It really highlights how far we've come and how far we still need to go in a field I think the lay person knows very little about.
18 reviews
October 29, 2023
An excellent primer on computational neuroscience that takes a historical perspective to build the reader's understanding of the field. This book is both approachable for lay folk and informative for those in adjacent fields.

Highly recommend for anyone interested in computational neuro who is looking for a jumping off point.
5 reviews
August 15, 2021
This is one of the most enjoyable books I've read yet. A very nicely written overview of computational or theoretical neuroscience. As an outsider fan of the field, I felt the topics were very well presented. I particularly enjoyed the bits of human history behind the science - not just for adding personality to the development of the field and making it more relatable, but because there's also a lot to be learned from these stories.

One thing I would have liked is for the author to have woven in slightly longer reviews of the relevant previous chapters as the book progressed. There is a broad array of topics and many, many names, which is inevitable when you credit/cite people fairly. I read slowly and my memory is not great, so I kept having to either go back to previous chapters or just move on every once in a while.

Finally, I appreciated ending the book with the topic of unifying theories and touching on consciousness.
June 20, 2023
For someone without a strong (or any) background in neuroscience, this book has provided what feels like a fundamental and easy-to-understand overview of an otherwise mathematically intensive field. It also covers the different models used to describe the brain, all without requiring the reader to understand any equations!!

However, some of the chapters did feel a bit like a history lesson, which from a "we are a developing scientific field" standpoint I get, but I still deduct half a star for the amount of outdated/outright incorrect ideas I had to go through... 4.5 stars:)
Asim
March 27, 2021
If you are part of an underrepresented computational/theoretical group in a Neuroscience institute, forget giving talks to get people interested, simply buy a couple of copies of this book and hide them in random places for people to find. I am pretty sure plenty of students will 'magically' start walking into your lab and asking questions!

I believe this book fills a very important gap in mainstream (read: general public friendly) neuroscience books. It nicely surveys the neuroscientific landscape from the perspective of quantitative approaches (Physics, Maths, Engineering) and personalities involved who shaped it. The book also contains an appendix for the mentioned mathematics for the more inclined reader. As an early career scientist who is dipping his feet into more theoretical approaches, I certainly found this volume both informative and inspirational. Definitely would recommend!
March 13, 2021
Well written narrative that brings significant individuals to life and clearly explains their contributions, based on a solid foundation of scientific research. Would appeal to those with a science background and the general reader.
December 8, 2022
Amazing book — clearly written and did not gloss over technical details.
Ayman
July 23, 2023
Models of the Mind by Grace Lindsay

Since the discovery of cardiovascular circulation, scientists and doctors have worked out the functionality and mechanisms of every organ in the body, but they struggled to explain how a blob of soft tissue filling the skull, weighing 2% of the body's mass yet consuming 20% of its energy, gives us thought and feeling and makes us who we are. Some gave up and declared that, while the body is a machine of muscles and organs, the brain is where the soul resides, operating under a different set of rules than those of the natural world. The first breakthrough in understanding the brain came by chance, when a scientist observed that a frog's muscle twitched when its nerves met an electric current. It became clear that the brain does have a natural mechanism. It is an electric mechanism, but natural nonetheless.

The study of the brain remained a biological science until computing pioneers of the 1950s, most notably Frank Rosenblatt, building on ideas from John von Neumann and others, built the "Perceptron", an artificial electric system they hoped would mimic the neural structure of the brain to perform cognitive functions. The artificial intelligence era was born, and since then the boundaries between biology, mathematics, physics, computing, and philosophy have all but collapsed as scientists from different disciplines collaborate to understand how the mind comes out of the brain, sometimes by discovering brain structures, mechanisms, and functions, and other times by discovering mathematical and computational models that explain them.

Neuroscientists have discovered and modeled several of the brain's different tricks, including:

1- How the brain processes visual and sensory information
2- How the brain controls bodily motion
3- How the brain forms and retrieves memories
4- How the brain learns and makes decisions

The huge advancements we see today in robotics and AI are the results of these discoveries.
The book concludes with two of the key remaining open questions in this field:

1- Is there a Grand Unified Theory (GUT) of the brain that unites all these different systems and mechanisms the way physical theories unite electromagnetic and nuclear forces? While physicists believe in a GUT that unifies physics, "the theory of everything," neuroscientists are divided into GUT believers and GUT deniers. The deniers' argument is that biology does not need unified theories: millions of years of evolution are enough for complex systems to emerge by amalgamating diverse simpler structures and mechanisms, the way multicellularity emerged from single-cell organisms.

2- What is consciousness and where is it in the brain? An interesting attempt in this field is a mathematical definition of consciousness that is a function of the level of interactions between a system’s components. The human brain loses consciousness during sleep, for example, because its activity level quiets down below a certain threshold. In that sense, a thermostat that continuously measures temperature and regulates AC units has a level of consciousness and potentially, we could build machines that are even MORE conscious than humans.

This book is a hidden treasure.
Thoriq Fauzan
December 26, 2022
I can take three things from here. On one side, there's this sense of pride and wonder that humanity has figured out SO MUCH of the underlying principles of the mechanics of our world: from predominantly superstitious societies to the switch to the rationality of the scientific method not so long ago; from the belief that the mind is of an elusive, "imponderable" essence, as described by a 19th-century philosopher, to us, a mere 200 years later, actively advancing the field of neuroscience and even drawing inspiration from it for the next generation of AI. We've come a long way in a relatively short time and will certainly continue doing so.

On the other hand, there's actually very little we know about the brain compared to what there is to know. How vision, or movement, or memory, or anything we take for granted in our daily experience works is still under active research and not yet completely understood. Will we ever get to comprehend the mechanics of our own brain? We can only keep trying; as they say, our brain is probably the most complicated object in the universe.

Third, even if we never gain a full comprehension of our own brain (can a computer understand how its own CPU works?), approaching that goal will still reap great benefits. The field of neuroscience has borrowed ideas from mathematics, physics, and many other disciplines, and has certainly contributed back. Even now, AI is ubiquitous, and it would never have come to be without past scientists' curiosity about their own brains.
Peter
January 25, 2024
A truly fascinating approach to explaining how we've come to understand the brain. Obviously, there's still a long way to go, but this book highlighted just how far we've come. So in terms of premise and content, this would get all the stars. It was surprisingly comprehensive, going quite deep into the mathematical and biological weeds, and wasn't afraid to take its time explaining relatively complex concepts. So for someone already working in neuroscience, this is a great overview of the history and frameworks within the field, and I wouldn't be surprised to see it assigned in mathematics, biology, computer science, and medical university courses.

For the rest of us, this is a hard one to recommend.

Even for someone like me, who's casually interested in neuroscience, has been exposed to many of the concepts before, and could follow along with most of the explanations, this was a slog. It also didn't help that I did this as an audiobook: while the narrator was brilliant, the book is simply not meant to be consumed solely audibly. After the first few lapses in concentration and the necessary rewinding, I had to accept that there wasn't any point. The details were simply too granular to be remembered without intentional study.

This isn't meant for the general public with an interest in the topic, so in terms of my enjoyment, 2 stars are as many as I can give. Otherwise, it's a great book for its narrow target market.
Displaying 1 - 30 of 63 reviews
