
The Alignment Problem: Machine Learning and Human Values


A jaw-dropping exploration of everything that goes wrong when we build AI systems and the movement to fix them. Today’s "machine-learning" systems, trained by data, are so effective that we’ve invited them to see and hear for us—and to make decisions on our behalf. But alarm bells are ringing. Recent years have seen an eruption of concern as the field of machine learning advances. When the systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole—and appear to assess Black and White defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And as autonomous vehicles share our streets, we are increasingly putting our lives in their hands. The mathematical and computational models driving these changes range in complexity from something that can fit on a spreadsheet to a complex system that might credibly be called “artificial intelligence.” They are steadily replacing both human judgment and explicitly programmed software. In best-selling author Brian Christian’s riveting account, we meet the alignment problem’s “first-responders,” and learn their ambitious plan to solve it before our hands are completely off the wheel.
In a masterful blend of history and on-the-ground reporting, Christian traces the explosive growth in the field of machine learning and surveys its current, sprawling frontier. Readers encounter a discipline finding its legs amid exhilarating and sometimes terrifying progress. Whether they—and we—succeed or fail in solving the alignment problem will be a defining human story. The Alignment Problem offers an unflinching reckoning with humanity’s biases and blind spots, our own unstated assumptions and often contradictory goals. A dazzlingly interdisciplinary work, it takes a hard look not only at our technology but at our culture—and finds a story by turns harrowing and hopeful.




30 reviews for The Alignment Problem: Machine Learning and Human Values

  1. 4 out of 5

    Tariq Mahmood

    My AI's perception as a superior technology which should be embraced unquestionably almost reverentially was successfully challenged after going through the numerous examples in this book. By the end of the book, I was convinced that AI is better and will get even more efficient as compared to human ingenuity, but needs to be constantly tested for questioned, any AI system depends upon the quality of the training data and the type of algorithms employed to solve any problem. My AI's perception as a superior technology which should be embraced unquestionably almost reverentially was successfully challenged after going through the numerous examples in this book. By the end of the book, I was convinced that AI is better and will get even more efficient as compared to human ingenuity, but needs to be constantly tested for questioned, any AI system depends upon the quality of the training data and the type of algorithms employed to solve any problem.

  2. 5 out of 5

    Karl Robert

    Brilliant reading that covers numerous aspects concerning learning and teaching of both humans and programs, a bit of practical ethics and philosophy all woven together under one topic that is the development of machine learning programs. It demonstrates perfectly how in order to teach you must first understand the subject and how you learn more as you teach it to someone. If you have any interest in AI, its safety and real ethical problems, or the history of how machine learning has developed hand in hand with psychology, computer science, social sciences and neurology, this book is well worth a read.

  3. 4 out of 5

    Max

    Really nice introduction to AI & the alignment problem - Christian gives a great overview over some bigger trends in ML (e.g. curiosity, imitation learning, transparency) and the history of AI, often connecting it to insights from cognitive science, which really enriched the book, speaking as a human and cognitive scientist. I wonder what more refined thinkers on the future of AI think of the book*, but I found that it connects nicely to many of the looming challenges with building AI systems that are robust and whose workings will be appropriately aligned with human values. Even though similar in style and purpose, I found that it has little overlap with the recent The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World and Human Compatible: Artificial Intelligence and the Problem of Control. I expect this triple to contribute a lot to introducing more smart cookies to face this formidable challenge and heaving AI's longer-term developments to many agendas as a Serious Issue. So here's to hoping that the ongoing AI revolution will be less of a naively hopeful leap than I'm afraid it will be. *Rohin Shah from the Alignment Newsletter [liked it a lot](https://www.lesswrong.com/posts/gYfgW...)

  4. 5 out of 5

    Dylan Matthews

    If you’re plugged into the artificial intelligence world, you’ll immediately recognize the title. The “alignment problem” in AI is ensuring that artificial agents’ goals align with the goals of humans. That’s not an easy problem to solve, as Christian details through countless examples. The “reward function” for AI programs is often misspecified. Early in the book Christian tells the story of AI researcher Dario Amodei, who in 2016 was working on a general-purpose AI to play computer games and had gotten stuck on a boat race. Instead of trying to win the race, the AI was instead spinning the boat around in circles, forever. The problem turned out to be simple. The AI was optimized to maximize in-game "points" rather than directly trying to win; the researchers thought points were a decent approximation but instead the AI had found a part of the water where it could get power-ups forever, and just stayed there rather than trying to race. The hardest part is that humans are not very good at articulating the reward function we want for our AI agents. We leave out important information — like “we actually want this boat to finish the race” — all the time.
Some of the most interesting parts of the book have nothing to do with alignment, per se, but instead chronicle the dramatic progress that deep learning, reinforcement learning, imitation learning, and other methods have made at improving AI performance — and the surprising parallels we’ve found between how they work and how the human brain works. The book keeps identifying moments where artificial neural networks are uncannily good at predicting how the literal neural network of the brain works — there’s a whole section on dopamine that’s particularly revealing. As someone who identifies as an effective altruist and who has many EA friends (like my colleague Kelsey Piper) who count AI risk as one of the causes they care most about, I found the book incredibly useful as a crib sheet to get more up to date on what they’re talking about. It’s light on equations and heavy on clear examples. If I were to recommend one book to lay people to convince them to care more about the safety of the intelligent machines humans are building, it would be The Alignment Problem. My only complaint is that the field moves fast enough that I could use regular Christian-y updates that de-mystify the latest developments.
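The reward misspecification described in this review can be sketched in a few lines. This is a toy illustration with made-up numbers, not the actual CoastRunners agent (which was a reinforcement learner, not a hand-scored policy comparison), but it shows how a proxy reward (points) can rank a degenerate behavior above the intended one (finishing the race).

```python
# Toy sketch of reward misspecification: hypothetical payoffs, chosen only
# to mirror the boat-race anecdote from the book.

def proxy_reward(policy, steps=100):
    """Points collected: the reward the designers actually specified."""
    if policy == "loop_for_powerups":
        return 3 * steps      # respawning power-ups, a few points every step
    elif policy == "finish_race":
        return 50             # one-time bonus for crossing the finish line
    raise ValueError(policy)

def true_objective(policy):
    """What the designers actually wanted: winning the race."""
    return policy == "finish_race"

policies = ["loop_for_powerups", "finish_race"]
best_by_proxy = max(policies, key=proxy_reward)

print(best_by_proxy)                  # loop_for_powerups
print(true_objective(best_by_proxy))  # False: proxy-optimal, goal-failing
```

Any optimizer pointed at `proxy_reward` alone will prefer the endless power-up loop, exactly the "spinning in circles, forever" behavior the review describes.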

  5. 4 out of 5

    KC

    A very well-written book on AI alignment that is more focused on recent research efforts and practical algorithms rather than higher-level philosophical ideas like "Life 3.0" or "Superintelligence." I thought the coverage of reinforcement learning was especially detailed but accessible. I would recommend this to anyone who is curious about current AI research. It fills a similar niche to Stuart Russell's "Human Compatible," but I would recommend this book over that one by a significant margin. (The content might become outdated relatively quickly though.)

  6. 5 out of 5

    Neel Nanda

    This is an EXCELLENT book about one of the most important problems of our times. I was already fairly familiar with the alignment problem and the technical side of things, but I still got a lot out of it, especially in the earlier sections about the history of AI and of reinforcement learning. I also really liked the deeper links he drew between reinforcement learning and how we make decisions. This book had the rare delight of being half about unfamiliar topics, and half about topics I knew well, yet doing justice to the topics I knew well. Christian has a gift for simplifying complex topics, using good examples, breaking things down intuitively, but keeping true to the core of the idea. He peppers the book with insights from personal interviews with people relevant to the story, and fills a page with names of technical reviewers of the book, and this clearly shows in the general accuracy and quality. This is now one of my go-to books for people who want to understand the alignment problem, the historical context, and some paths to potential solutions.

  7. 5 out of 5

    Largar

    A terrific book on what must be the most important question in the field of artificial intelligence – how do we ensure AI systems don’t inadvertently evolve into a technologically dystopian state? Surprisingly, the key issues are as much philosophical and practical as they are technical. The good news is there are a lot of talented and dedicated people working on AI safety and ethics, many of whom we ‘meet’ in the course of reading the book. Engaging, very readable, and highly informative! Strongly recommend.

  8. 5 out of 5

    Rose Linke

    This fascinating book looks deeply at both the history of machine learning and the urgent challenges around ethics and safety that are being faced by those on the front lines. Well-researched and well-written, this book is approachable for people without a technical background—you will learn a lot!—while still being thought-provoking for those in the field. Highly recommended!

  9. 4 out of 5

    Matthew Emery

    An excellent overview of the current state of ethical AI. The explanations are clear and the topic is fascinating. The last half or so feels much more theoretical than the first. I can only imagine how difficult it would be to deploy cooperative inverse reinforcement learning in practice.

  10. 4 out of 5

    Terralynn Forsyth

    I read this book as part of the Creative Destruction Lab (CDL) Reading group based in Toronto. The book was an excellent deep dive into important AI ethics questions and issues, with each chapter dedicated to a particular technical problem such as representation, fairness, transparency, etc. The book was written from conversations with the leading scholars working on these issues, as well as an enjoyable historical overview of contributing disciplines aiding today's solutions. These included psychology, neuroscience, and cognitive science. The book was a perfect balance of ease/enjoyability of read and deep information on the topic of AI ethics with tasteful reflections on humanity itself. I'd recommend any of Brian Christian's books for those interested in the wider field of AI and its effects on society.

  11. 4 out of 5

    Joel

    I enjoyed the book as one who had used and taught neural networks, but had not done major research in the area. The author gives lots of examples that are easily understood without needing mathematical details. The book addresses the use of ML in language interpretation, where the learning process picks up the current (or past) biases of the culture, e.g., the implication that a doctor is male or a nurse is female. The use of AI in hiring produces its own biases, since it is usually racist and sexist. The author points out that simply eliminating the race or gender of an applicant does not work since 1) there are other correlated items that will be detected and 2) if the race/gender is known, then the bias may be addressed, e.g., accounting for productivity lags for pregnant women. I thought the author did a good job of relating studies from human development and learning to methodologies of ML, e.g., imitation learning, reinforcement learning, rewards and motivation for learning. The author gives good examples of what happens from using inappropriate rewards and costs. Often this results from not being able to adequately define the goal in mathematical terms. On the issue of fairness, which I have a continued interest in because of the US history of racism, he points out the problem of defining what is fair. The major example is using ML for predicting recidivism in paroles. There are some obvious cases where whites have a distinct advantage over blacks/browns.
The problem of weighting false negatives vs false positives is discussed: is it better to release someone who will commit a crime or retain someone who could return to a productive life? One of the papers I sent addresses how it is impossible to be "fair" if the ensembles of whites and blacks have different distributions for recidivism. I think knowing the problems presented in the book would have helped me motivate the methods and warn about the likely errors produced by ML systems.
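The impossibility the reviewer alludes to (due to Chouldechova and to Kleinberg et al.) can be checked with elementary arithmetic. The sketch below uses illustrative numbers, not real recidivism data: if two groups have different base rates and the classifier is held to the same precision (PPV) and true positive rate for both, the implied false positive rates must differ.

```python
# Minimal numeric sketch of the fairness impossibility result.
# Base rates, TPR, and PPV here are illustrative, not empirical.

def false_positive_rate(base_rate, tpr, ppv):
    """FPR implied by fixing base rate r, TPR, and PPV.

    From PPV = TPR*r / (TPR*r + FPR*(1-r)), solving for FPR gives
    FPR = TPR*r*(1-PPV) / (PPV*(1-r)).
    """
    return tpr * base_rate * (1 - ppv) / (ppv * (1 - base_rate))

tpr, ppv = 0.8, 0.8                            # equalized across both groups
fpr_a = false_positive_rate(0.50, tpr, ppv)    # group with higher base rate
fpr_b = false_positive_rate(0.25, tpr, ppv)    # group with lower base rate

print(round(fpr_a, 3))   # 0.2
print(round(fpr_b, 3))   # 0.067
```

The higher-base-rate group ends up flagged falsely three times as often, even though the classifier is equally "accurate" for both groups by the calibration metrics. The divergence vanishes only when the base rates are equal.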

  12. 5 out of 5

    Sambasivan

    One of the best books on AI and ML I have ever read. Asks some fundamental questions. This is groundbreaking work and will become a classic.

  13. 4 out of 5

    Klaus-Michael Lux

    An excellent read! Brian Christian is a great writer, deftly synthesizing different voices and research efforts into an overarching narrative about a simple question at the heart of much of current pondering into AI: How can we make sure our creations respect our values? This "alignment problem" becomes more and more urgent as artificial agents are being rolled out to real-world scenarios. Making sure to avoid the clichés of the genre (no trolley problem!), Christian covers fairness, representation and transparency, providing a great and very accessible introduction to recent research for example on bias in word embeddings and the impossibility of aligning different mathematical definitions of fairness. He always manages to get great, readable quotes out of his many conversation partners from real-world research, e.g. Moritz Hardt of the fairness definition fame. The second half of the book covers reinforcement learning in great detail and similarly manages to stay clear of the known and boring. Christian introduces the reader to a number of different efforts in the field, aimed at going beyond the mere maximization of hand-crafted reward functions, for example intrinsic motivation and imitation. I've not before seen the topic explained in such an accessible and concise way and I learnt a great deal along the way!
In sum, heartily recommended to everyone working in AI and also to everyone who isn't and still wants to get up to speed regarding the alignment of human values and machines.

  14. 5 out of 5

    Jacob Mainwaring

    Really interesting and important read. I thought it was all going to be more around things like fairness and transparency in machine learning. The book did cover this but talked a lot more about the future of AI, reinforcement learning, and questions we must consider to make sure artificial intelligence is working towards the same goals as humans’ (this is the alignment problem). He did a good job of explaining reinforcement learning, as well as variants like inverse reinforcement learning. Tackling the alignment problem will require a wide range of disciplines, from computer science and engineering to philosophy and ethics. Hopefully we apply as much effort and attention to the safe, fair deployment of AI as we do to general-purpose deployment. The quote that probably best summarizes the book is the following: “we had better be quite sure that the purpose put into the machine is the purpose which we really desire and not merely a colorful imitation of it”. It’s a surprisingly difficult but important problem, and I’m glad it’s starting to get more attention, both from researchers and the general public.

  15. 5 out of 5

    Robert Irish

    Brian Christian takes us carefully and thoughtfully through the labyrinth of artificial intelligence (AI) and ethics. The book moves from fairly comprehensible issues in the field like transparency and salience--two features that suggest we should be able to understand what AI is up to, and see that it is looking at the right things--to the nuances of machine learning well beyond the human capacity to comprehend. The ethical questions--of "alignment" with human values--are explored thoughtfully and without exaggeration or hyperbole (unlike, say, Our Final Invention: Artificial Intelligence and the End of the Human Era). The care that Christian takes throughout makes some of his conclusions all the more alarming or open for concern. The AI safety and ethics thinkers he tells us about suggest we need to be cautious about how AI develops because we are long past the point where we can 'just pull the plug'.

  16. 4 out of 5

    Ben

    Absolutely one of the best books I’ve read this year. Thought-provoking throughout. Great overview of ML which taught me a lot, even though I thought I was already reasonably familiar. Seriously, if you’re reading this, read this book.

  17. 5 out of 5

    Andrei Khrapavitski

    The book I chose to read this holiday season was The Alignment Problem: Machine Learning and Human Values, by Brian Christian. Released in October, 2020, the book offers a detailed examination of efforts to overcome the shortcomings of the current systems. I was positively surprised that this book was much less a foreboding tale of the distant future than an attentive exploration of the field’s challenges. Part I of the book deals with the issues familiar to every AI practitioner: the quality of predictions, model bias, dealing with datasets, training sets, etc. The second part was especially good with a not too technical but still valuable overview of reinforcement learning, including the groundbreaking work of Andrew Barto and Richard Sutton. By the way, if you are interested in a more “computer sciency” look at RL, I highly recommend Barto and Sutton’s magnum opus Reinforcement Learning: An Introduction. Part III plunges us into the realm of AI safety research, as we tour some of the best ideas currently going for how to align complex autonomous systems with norms and values too subtle or elaborate to specify directly. This is also the most philosophical part of the book where you’ll read about moral uncertainty, utilitarianism, effective altruism, etc. All this is to show how hard it would be to align a more complex system with us humans.
Even when we are not talking about AGI, but more and more so when we’re building more complex systems, such agents are going to need good models of us to make sense of how the world works and of what they ought and ought not to do. It is extremely challenging for them and for us, as it turns out. As the author predicts, alignment will be messy. Ideally it would be a learning/teaching process where both we and they could learn from one another. No doomsday conclusions in this book but a solid text aimed at anyone interested in the subject.

  18. 5 out of 5

    Ken

    This book is fantastic. Apart from 3b1b and other youtube videos, I haven't studied machine learning before. But this provided a LOT of insight into how to train a machine, potential issues, and possible solutions. The flow is almost perfect, and Brian Christian goes into a perfect amount of detail in describing these issues. There were tons of footnotes and sources but I only read about 5% of them. I love how he makes sure he mentions all the people involved in the research-- a pattern I noticed in each chapter was that he'd begin with the people, the problem, and then move on to the solution. This book is really well-written, and I highly recommend it to anyone interested in learning more about AI, machine learning, and its implications. p.s. Thank you anonymous person on Hackernews for recommending this book!

  19. 5 out of 5

    Ken Nickerson

    Solid, sweeping view on AI/ethics with grounding and history. I wish quality writers like Brian Christian would partner with deep practitioners in the field, with summary, review, and debate at the end of each chapter. Even if it's just comfort notes like "we're on top of this issue" or here's another more practical way to consider this conflict (e.g. resources). It's a wonderful history and philosophy work; but like many books that look to summarize a space, I wish they could ground it with technical depth, ideally an accompanying Git repo or chapter-by-chapter debate with folks like Rodney Brooks or Geoff Hinton to braid the history, philosophy, and technology into a coherent whole. Great book, and recommended.

  20. 5 out of 5

    Ojashvi

    Just finished reading The Alignment Problem by Brian Christian this morning! Well written and well researched! A lot of interesting perspectives on the intricate connection between machine learning and human values. A great introduction to the various kinds of biases that are inherent in machine learning! Quite an approachable text, though I would have preferred a little more technical evaluation of the problems. The author does a great job of presenting the pertinent issues in context of their origin and current development, however he often digresses. The grandeur in the tone of the book can also be quite distracting. That said, I did enjoy this book a lot and learned quite a few things along the way!

  21. 4 out of 5

    Kevin Whitaker

    This is a solid book on its own terms but it doesn't live up to the title -- most of it is about the foundations and history of artificial intelligence (how it was developed, parallels to human and animal intelligence), where the content is interesting. There isn't as much on ethics in particular, and what's there is mostly the famous examples -- if you're already up to date on AI ethics you'll have seen it all already, and if not you'll have to wade through a lot of other stuff before you get there.

  22. 4 out of 5

    Lloyd Fassett

    10/30/20 Found it through the WSJ. I don't know if it's really good, but I'm interested in the subject and very much liked the last A.I. book I read. ‘The Alignment Problem’ Review: When Machines Miss the Point

  23. 4 out of 5

    Sayan

    Nice overview of important topics. The book gives a nice nontechnical overview of topics around safe AI, reinforcement learning, and fairness, with extensive bibliographic notes. I particularly enjoyed reading the compact historical overview of the evolution of ideas in RL, which occupies the middle half of the book.

  24. 4 out of 5

    Stuart

    Not as scary as I would have thought, but a thoughtful look at the different issues in programming artificial intelligences to do what you want them to do, not just what you ask them to do. A bit technical for my background, but an interesting look at a field that's constantly changing. The tie-ins to neuroscience and human psychology were really interesting as well.

  25. 4 out of 5

    Sakares Saengkaew

    Very up to date with many modern AI technical terms and AI ethics as well. I finished this book in audio format, so I might have missed some useful figures in the book. Anyway, the author raises the current AI bias issues to the public very well.

  26. 5 out of 5

    Dan Howard

    This was a fascinating read. The author did an excellent job of describing the history, essence, and potential pitfalls of machine learning/AI in terms that are understandable to someone pretty much totally unfamiliar with the field.

  27. 5 out of 5

    James Yoon

    Great introduction and discussion of the alignment problem. Well-organized. It does a great job taking you through the history of AI efforts and the different ways alignment (model) problems occur. Excellent recognition of the importance and value of uncertainty.

  28. 5 out of 5

    Travis McKinstry

    I really enjoyed this book. There’s some heavy content, obviously geared more towards computer science, but I believe it has a lot to offer even to those not in the field. But again, the content gets heavy in some places. Overall it was a quick read

  29. 5 out of 5

    Jalen Lyle-Holmes

    Great overview of history and recent developments in machine learning with an alignment focus, told in a very engaging and three-dimensional narrative fashion.

  30. 5 out of 5

    Anna Feshchenko

    This eye-opening and thought-provoking book gives you a glimpse into the present state of affairs in the intersection of humans and AI systems, and their future mutual challenges that must be faced.
