Read Liu Heung Shing: A Life in a Sea of Red (Pi Li, Christopher Phillips, Geoff Raby, Liu Heung Shing; ISBN 9783958295452)

By Brett Callahan on Friday, June 7, 2019

This book contains the two most important bodies of work by Pulitzer Prize–winning photojournalist Liu Heung Shing (born 1951): photos that document pivotal decades of Communism in China and Russia, made between 1976 and 2017. A Life in a Sea of Red presents scenes of hope, hardship and change under―and in the aftermath of―Communist rule.
Liu arrived in Beijing in 1978 on assignment for Time magazine to photograph the country at a moment of momentous transition―from the withdrawal of Mao’s portraits from the public realm, to the increase in free commercial, artistic and personal expression, to the violence in Tiananmen Square in 1989 and, more recently, the rise of Chinese yuppies.
In contrast, Liu’s photos of Russia, taken between 1990 and 1993, document the collapse of a Communist state. The most enduring of these shows Gorbachev throwing down the speech he delivered on December 25, 1991, announcing his resignation and signaling the end of the Soviet Union and Cold War.



Product details

  • Hardcover: 288 pages
  • Publisher: Steidl (July 23, 2019)
  • Language: English
  • ISBN-10: 3958295452


Ebook Strip-Pieced Bargello: Dynamic Quilts, Step by Step (Judith Steele; ISBN 9781604689860)

By Brett Callahan


Make mesmerizing bargello quilts with the easy-to-follow instructions and charts included in Strip-Pieced Bargello. It's like quilting by number!



* Sew strips of fabric together
* Slice the strip sets into segments
* Arrange segments into swaths of movement and color
* Create the illusion of curves using only straight strips!



Start with as few as four (or as many as 24) fabrics to sew a table runner, wall hanging, tote, tree skirt, or larger quilt. Armed with Judith's tools, tips, and techniques, as well as her smart methods for staying organized, you'll be ready to make your first--or your 21st!--bargello beauty.




"These quilts are really easy to piece but the effect they make is such a super WOW! A really unique quilt is a round tree skirt that makes a spiral. Lots of colors represented. There is also one tiny tote that is just adorable."

Product details

  • Paperback: 80 pages
  • Publisher: That Patchwork Place (March 15, 2019)
  • Language: English
  • ISBN-10: 1604689862



Strip-Pieced Bargello: Dynamic Quilts, Step by Step Reviews


  • I can’t buy until I see the inside

Ebook Instant Calm: 2-Minute Meditations to Create a Lifetime of Happy (Karen Salmansohn)

By Brett Callahan






Product details

  • Print Length: 112 pages
  • Publisher: Ten Speed Press (August 27, 2019)
  • Publication Date: August 27, 2019
  • Sold by: Digital Services LLC
  • Language: English
  • ASIN: B07L2H74BD





Ebook Benito Mussolini: The Life and Legacy of Italy's Fascist Prime Minister (Audible Audio Edition; Charles River Editors, Colin Fluxman)

By Brett Callahan on Thursday, June 6, 2019


It's easy to forget how young Italy was when Benito Mussolini was born on July 29, 1883. It is hard to conceive that a territory with such a long and ancient history was once a young state, troubled by constant conflict and instability. Like Germany, Italy was unified in 1861, but unlike its northern cousin, its previous history was one of separation: Italy had no great romantic idea of a "Great Germany" to keep it unified even during the wars between its city-states. 

Benito Mussolini was born and raised in a highly volatile environment where ideas already considered extreme by most contemporary observers, such as socialism, would undergo a deep and violent transformation. Mussolini would ride that wave to power, and he would hold it for decades as he opportunistically tried to strengthen Italy's position and empire. That would lead him to foreign interventions in Africa and eventually an alliance with Nazi Germany's Adolf Hitler, ultimately costing him everything and devastating his country throughout World War II. 

Mussolini's final act was an attempt to flee his fate. On April 25, 1945, he was able to move about without German interference as the Allies advanced. He wore a German uniform to hide his identity and tried to march north with retreating troops, thinking he would find a way to freedom from Germany, but an armed force of partisan troops stopped the column on April 27, 1945. Mussolini was immediately identified, captured, and briefly jailed along with his lover, Claretta Petacci. There was no great trial waiting for Mussolini and no last moment under the spotlight. The partisan troops organized a show trial to give the proceedings some sense of legality, and on April 29, 1945, they took Mussolini and Claretta out of jail. The Italian dictator was shot, along with his lover, after which their corpses were brought back to Milan's Loreto square and hung by their feet. The very next day, Hitler would commit suicide in his bunker in Berlin, and the fighting in Europe would finally come to an end a little more than a week later. 

Benito Mussolini The Life and Legacy of Italy's Fascist Prime Minister profiles one of the 20th century's most notorious leaders. You will learn about Mussolini like never before.




"History at its finest.

I love history and Charles River Editors make it so easy to keep you informed! As I have aged I find that I have forgotten many things and this is a quick way to refresh my knowledge. Thank you, guys! History at its finest."

Product details

  • Audible Audiobook
  • Listening Length: 2 hours and 6 minutes
  • Program Type: Audiobook
  • Version: Unabridged
  • Publisher: Charles River Editors
  • Audible.com Release Date: February 20, 2019
  • Language: English
  • ASIN: B07NWZD1NF



Benito Mussolini: The Life and Legacy of Italy's Fascist Prime Minister (Audible Audio Edition) Reviews


  • How did Mussolini and Hitler persuade the people to follow them into war when there was to be no benefit to the masses?

Download PDF Opposing the Adverse Expert: A Comprehensive Guide for Every Stage of Litigation (Stephen D. Easton; ISBN 9781641050326)

By Brett Callahan


Opposing the Adverse Expert is a step-by-step guide to investigating, evaluating, and opposing the adverse expert in civil cases. It outlines tactics you can use to gather information about the adverse expert, both in the discovery process and on your own, to:

  • take effective expert depositions;
  • evaluate the adverse expert's analysis of the key issues;
  • move to exclude his testimony;
  • conduct devastating cross-examinations of incorrect experts;
  • make the most of your voir dire, opening statement, and closing argument;
  • advocate powerfully about expert issues on appeal.

Purchasers of the book gain access to our website, which includes checklists you can put to immediate use in your practice, outlines for expert depositions and cross-examinations, nationwide charts of expert witness law, and state outlines with numerous citations to key expert witness rules, cases, and pattern jury instructions.



Product details

  • Paperback: 952 pages
  • Publisher: American Bar Association; 2nd edition (April 22, 2019)
  • Language: English
  • ISBN-10: 1641050322


Download PDF Superintelligence: Paths, Dangers, Strategies (Nick Bostrom; ISBN 9780198739838)

By Brett Callahan


A New York Times bestseller

Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.

The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.

But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?

This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.



"Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.

He sees three main paths to superintelligence:

1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc. are applied to bring about a superintelligence.

2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body. If your memories, thoughts and capabilities arise from the connectivity structure and patterns/timings of neural firings of your brain, then your consciousness should awaken in that synthetic body.

The beauty of this approach is that humanity would not have to understand how the brain works. It would simply have to copy the structure of a given brain (to a sufficient level of molecular fidelity and precision).

3. The Neuromorphic path -- In this case, neural network modeling and brain emulation techniques would be combined with AI technologies to produce a hybrid form of artificial intelligence. For example, instead of copying a particular person's brain with high fidelity, broad segments of humanity's overall connectome structure might be copied and then combined with other AI technologies.

Although Bostrom's writing style is quite dense and dry, the book covers a wealth of issues concerning these 3 paths, with a major focus on the control problem. The control problem is the following: How can a population of humans (each whose intelligence is vastly inferior to that of the superintelligent entity) maintain control over that entity? When comparing our intelligence to that of a superintelligent entity, it will be (analogously) as though a bunch of, say, dung beetles are trying to maintain control over the human (or humans) that they have just created.

Bostrom makes many interesting points throughout his book. For example, he points out that a superintelligence might very easily destroy humanity even when the primary goal of that superintelligence is to achieve what appears to be a completely innocuous goal. He points out that a superintelligence would very likely become an expert at dissembling -- and thus able to fool its human creators into thinking that there is nothing to worry about (when there really is).

I find Bostrom's approach refreshing because I believe that many AI researchers have been either unconcerned with the threat of AI or they have focussed only on the threat to humanity once a large population of robots is pervasive throughout human society.

I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and comprehend human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human "natural" languages are very very different from artificially created technical languages, such a mathematical, logical or computer programming languages.)

Over the years I have been concerned with the dangers posed by "run-away AI" but my colleagues, for the most part, seemed largely unconcerned. For example, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled: Artificial Intelligence: A Modern Approach (3rd ed), 2010. In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: "But, so far, AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects" (p. 1052).

In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within 2 to 3 centuries (or less). I imagine three (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.

(1) The Robotic Space-Travel scenario: In this scenario, autonomous robots are developed for space travel and asteroid mining. Unfortunately, many people believe in the alternative "Star Trek" scenario, which assumes that: (a) faster-than-light (warp drive) will be developed and (b) the galaxy will be teeming, not only with planets exactly like Earth, but also these planets will be lacking any type of microscopic life-forms dangerous to humans. In the Star Trek scenario, humans are very successful space travelers.

However, it is much more likely that making it to a nearby planet, say, 100 light years away, will require that humans travel for 1,000 years (at 1/10th the speed of light) in a large metal container, all the while trying to maintain a civilized society as they are constantly irradiated while moving about in a weak gravitational field (so their bones waste away while they constantly recycle and drink their urine). When their distant descendants finally arrive at the target planet, these descendants will very likely discover that the planet is teeming with deadly, microscopic parasites.

Humans have evolved on the surface of the Earth and thus their major source of energy is oxygen. To survive they must carry their environment around with them. In contrast, synthetic entities will require no oxygen or gravity. They will not be alive (in the biological sense) and so therefore will not have to expend any energy during the voyage. A simple clock can turn them on once they have arrived at the target planet and they will be unaffected by any forms of alien microbial life.

If there were ever a conflict between humans and these space-traveling synthetic AI entities, who would have the advantage? The synthetic entities would be looking down on us from outer space -- a definitive advantage. (If an intelligent alien ever visits Earth, it is 99.9999% likely that whatever exits the alien spacecraft will be a non-biological, synthetic entity -- mainly because space travel is just too difficult for biological creatures.)

(2) The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots that are designed to kill humans will solve this problem. Unfortunately, if control over such warrior robots is ever lost, then this could spell disaster for humanity.

(3) The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers because we are so dependent on them. Without computers our financial, transportation, communication and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans and in which the design and manufacture of robots are handled also by robots. Assume that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory but it turns out that the hydroelectric plant (that supplies it with power) is run by robots made at that same factory. So now the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.

I had always thought that, for AI technology to pose an existential danger to humanity, it would require processes of robotic self-replication. In the Star Trek series, the robot Data is more intelligent than many of his human colleagues, but he has no desire to make millions of copies of himself, and therefore he poses less of a threat than, say, South American killer bees (which have been unstoppable as they have spread northward).

Once synthetic entities have a desire to improve their own designs and to reproduce themselves, they will have many advantages over humans. Here are just a few:

1. Factory-style replication: Humans require approximately 20 years to produce a functioning adult human. In contrast, a robotic factory could generate hundreds of robots every day. The closest event to human-style (biological) replication will occur each time a subset of those robots travel to a new location to set up a new robotic factory.

2. Instantaneous learning: Humans have always dreamt of a "learning pill" but, instead, they have to undergo that time-consuming process called "education". Imagine if one could learn how to fly a plane just by swallowing a pill. Synthetic entities would have this capability. The brains of synthetic entities will consist of software that executes on universal computer hardware. As a result, each robot will be able to download additional software/data to instantly obtain new knowledge and capabilities.

3. Telepathic communication: Two robots will be able to communicate by radio waves, with robot R1 directly transmitting some capability (e.g., data and/or algorithms learned through experience) to another robot R2.

4. Immortality: A robot could back up a copy of its mind (onto some storage device) every week. If the robot were destroyed, a new version could be reconstructed with just the loss of one week's worth of memory.

5. Harsh Environments: Humans have developed clothing in order to be able to survive in cold environments. We go into a closet and select thermal leggings, gloves, goggles, etc. to go snowboarding. In contrast, a synthetic entity could go into its closet and select an alternative, entire synthetic body (for survival on different planets with different gravitational fields and atmospheres).

What is fascinating about Bostrom's book is that he does not emphasize any of the above. Instead, he focusses his book on the dangers, not from a society of robots more capable than humans, but, instead, on the dangers posed by a single entity with superintelligence coming about. (He does consider what he calls the "multipolar" scenario, but that is just the case of a small number of competing superintelligent entities.)

Bostrom is a professor of philosophy at Oxford University and so the reader is also treated to issues in morality, economics, utility theory, politics, value learning and more.

I have always been pessimistic about humanity's chance of avoiding destruction at the hands of it future AI creations and Bostrom's book focusses on the many challenges that humanity may (soon) be facing as the development of a superintelligence becomes more and more likely.

However, I would like to point out one issue that I think Prof. Bostrom mostly overlooks. The issue is Natural Language Processing (NLP). He allocates only two sentences to NLP in his entire book. His mention of natural language occurs in Chapter 13, in his section on "Morality models". Here he considers that, when giving descriptions to the superintelligence (of how we want it to behave), its ability to understand and carry out these descriptions may require that it comprehend human language, for example, the term "morally right".

He states:

"The path to endowing an AI with any of these concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by 'morally right' " (p. 218)

I fear that Bostrom has not sufficiently appreciated the requirements of natural language comprehension and generation for achieving general machine intelligence. I don't believe that an AI entity will pose an existential threat until it has achieved at least a human level of natural language processing (NLP).

Human-level consciousness is different from animal-level consciousness because humans are self-aware. They not only think thoughts about the world; they also think thoughts about the fact that they are thinking thoughts. They not only use specific words; they are aware of the fact that they are using words and of how different categories of words differ in functionality. They are not only capable of following rules; they are aware of the fact that rules exist and that they are able to follow (or not follow) those rules. Humans are able to invent and modify rules.


Product details

  • Paperback 390 pages
  • Publisher Oxford University Press; Reprint edition (May 1, 2016)
  • Language English
  • ISBN-10 9780198739838
  • ISBN-13 978-0198739838
  • ASIN 0198739834

Read Superintelligence Paths Dangers Strategies Nick Bostrom 9780198739838 Books

Tags : Superintelligence Paths, Dangers, Strategies [Nick Bostrom] on . <strong></strong><strong><em>A New York Times bestseller</em></strong><strong></strong> Superintelligence asks the questions What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life. The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence,Nick Bostrom,Superintelligence Paths, Dangers, Strategies,Oxford University Press,0198739834,Artificial intelligence;Moral and ethical aspects.,Artificial intelligence;Philosophy.,Artificial intelligence;Social aspects.,ARTIFICIAL INTELLIGENCE,COMPUTERS / Intelligence (AI) Semantics,Computer Books General,Computer/General,Computers,Computers - General Information,General Adult,Great Britain/British Isles,Intelligence (AI) Semantics,Non-Fiction,UNIVERSITY PRESS

Superintelligence Paths Dangers Strategies Nick Bostrom 9780198739838 Books Reviews:


  • Prof. Bostrom has written a book that I believe will become a classic within that subarea of Artificial Intelligence (AI) concerned with the existential dangers that could threaten humanity as the result of the development of artificial forms of intelligence.

    What fascinated me is that Bostrom has approached the existential danger of AI from a perspective that, although I am an AI professor, I had never really examined in any detail.

    When I was a graduate student in the early 80s, studying for my PhD in AI, I came upon comments made in the 1960s (by AI leaders such as Marvin Minsky and John McCarthy) in which they mused that, if an artificially intelligent entity could improve its own design, then that improved version could generate an even better design, and so on, resulting in a kind of "chain-reaction explosion" of ever-increasing intelligence, until this entity would have achieved "superintelligence". This chain-reaction problem is the one that Bostrom focusses on.
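    The "chain-reaction" idea can be caricatured in a few lines of code. This is purely an illustration of compound self-improvement, with an invented growth-rate parameter; nothing here comes from Bostrom's book:

```python
# Toy model of recursive self-improvement: at each design round, the entity's
# current intelligence determines how large an improvement it can make to its
# successor. The 50% per-round improvement rate is an arbitrary assumption.
def self_improvement_trajectory(initial=1.0, rate=0.5, rounds=10):
    levels = [initial]
    for _ in range(rounds):
        levels.append(levels[-1] * (1 + rate))  # smarter designer, bigger step
    return levels

trajectory = self_improvement_trajectory()
# Compound growth: after n rounds, intelligence = initial * (1 + rate)**n
```

    Whether real systems would follow anything like this curve is exactly the open question; the sketch only shows why even a modest per-round gain compounds explosively.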

    He sees three main paths to superintelligence:

    1. The AI path -- In this path, all current (and future) AI technologies, such as machine learning, Bayesian networks, artificial neural networks, evolutionary programming, etc. are applied to bring about a superintelligence.

    2. The Whole Brain Emulation path -- Imagine that you are near death. You agree to have your brain frozen and then cut into millions of thin slices. Banks of computer-controlled lasers are then used to reconstruct your connectome (i.e., how each neuron is linked to other neurons, along with the microscopic structure of each neuron's synapses). This data structure (of neural connectivity) is then downloaded onto a computer that controls a synthetic body. If your memories, thoughts and capabilities arise from the connectivity structure and patterns/timings of neural firings of your brain, then your consciousness should awaken in that synthetic body.

    The beauty of this approach is that humanity would not have to understand how the brain works. It would simply have to copy the structure of a given brain (to a sufficient level of molecular fidelity and precision).

    3. The Neuromorphic path -- In this case, neural network modeling and brain emulation techniques would be combined with AI technologies to produce a hybrid form of artificial intelligence. For example, instead of copying a particular person's brain with high fidelity, broad segments of humanity's overall connectome structure might be copied and then combined with other AI technologies.
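    The whole-brain-emulation path is, at bottom, a copy operation over a graph: record which neurons connect to which (and with what synaptic strength), then instantiate that structure elsewhere. A minimal sketch; the data layout is an invented toy, not an actual emulation format:

```python
import copy

# A connectome reduced to its essentials: neurons as nodes, synapses as
# weighted directed edges.
connectome = {
    "n1": {"n2": 0.8, "n3": -0.4},  # n1 synapses onto n2 and n3
    "n2": {"n3": 1.2},
    "n3": {"n1": 0.1},
}

def emulate(brain):
    """Copy the connectivity structure without needing to understand it."""
    return copy.deepcopy(brain)

replica = emulate(connectome)
assert replica == connectome      # same structure...
assert replica is not connectome  # ...on independent "hardware"
```

    This is the sense in which humanity "would not have to understand how the brain works": the copy preserves structure, not theory.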

    Although Bostrom's writing style is quite dense and dry, the book covers a wealth of issues concerning these 3 paths, with a major focus on the control problem. The control problem is the following: How can a population of humans (each of whose intelligence is vastly inferior to that of the superintelligent entity) maintain control over that entity? When comparing our intelligence to that of a superintelligent entity, it will be (analogously) as though a bunch of, say, dung beetles were trying to maintain control over the human (or humans) that they have just created.

    Bostrom makes many interesting points throughout his book. For example, he points out that a superintelligence might very easily destroy humanity even when the primary goal of that superintelligence is to achieve what appears to be a completely innocuous goal. He points out that a superintelligence would very likely become an expert at dissembling -- and thus able to fool its human creators into thinking that there is nothing to worry about (when there really is).

    I find Bostrom's approach refreshing because I believe that many AI researchers have been either unconcerned with the threat of AI or they have focussed only on the threat to humanity once a large population of robots is pervasive throughout human society.

    I have taught Artificial Intelligence at UCLA since the mid-80s (with a focus on how to enable machines to learn and comprehend human language). In my graduate classes I cover statistical, symbolic, machine learning, neural and evolutionary technologies for achieving human-level semantic processing within that subfield of AI referred to as Natural Language Processing (NLP). (Note that human "natural" languages are very, very different from artificially created technical languages, such as mathematical, logical or computer programming languages.)

    Over the years I have been concerned with the dangers posed by "run-away AI" but my colleagues, for the most part, seemed largely unconcerned. For example, consider a major introductory text in AI by Stuart Russell and Peter Norvig, titled Artificial Intelligence: A Modern Approach (3rd ed.), 2010. In the very last section of that book Norvig and Russell briefly mention that AI could threaten human survival; however, they conclude: "But, so far, AI seems to fit in with other revolutionary technologies (printing, plumbing, air travel, telephone) whose negative repercussions are outweighed by their positive aspects" (p. 1052).

    In contrast, my own view has been that artificially intelligent, synthetic entities will come to dominate and replace humans, probably within 2 to 3 centuries (or less). I imagine three (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.

    (1) The Robotic Space-Travel scenario: In this scenario, autonomous robots are developed for space travel and asteroid mining. Unfortunately, many people believe in the alternative "Star Trek" scenario, which assumes that (a) faster-than-light travel (warp drive) will be developed and (b) the galaxy will be teeming, not only with planets exactly like Earth, but also with planets lacking any type of microscopic life-form dangerous to humans. In the Star Trek scenario, humans are very successful space travelers.

    However, it is much more likely that making it to a nearby planet, say, 100 light years away, will require humans to travel for 1,000 years (at 1/10th the speed of light) in a large metal container, all the while trying to maintain a civilized society as they are constantly irradiated and move about within a weak gravitational field (so their bones waste away while they constantly recycle and drink their urine). When their distant descendants finally arrive at the target planet, these descendants will very likely discover that it is teeming with deadly, microscopic parasites.

    Humans have evolved on the surface of the Earth, and their metabolism depends on oxygen. To survive in space they must carry their environment around with them. In contrast, synthetic entities will require no oxygen or gravity. They will not be alive (in the biological sense) and therefore will not have to expend energy during the voyage. A simple clock can turn them on once they have arrived at the target planet, and they will be unaffected by any form of alien microbial life.

    If there were ever a conflict between humans and these space-traveling synthetic AI entities, who would have the advantage? The synthetic entities would be looking down on us from outer space -- a definitive advantage. (If an intelligent alien ever visits Earth, it is 99.9999% likely that whatever exits the alien spacecraft will be a non-biological, synthetic entity -- mainly because space travel is just too difficult for biological creatures.)

    (2) The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots that are designed to kill humans will solve this problem. Unfortunately, if control over such warrior robots is ever lost, then this could spell disaster for humanity.

    (3) The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers because we are so dependent on them. Without computers our financial, transportation, communication and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans and in which the design and manufacture of robots are also handled by robots. Assume that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory, but it turns out that the hydroelectric plant (that supplies it with power) is run by robots made at that same factory. So now the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.

    I had always thought that, for AI technology to pose an existential danger to humanity, it would require processes of robotic self-replication. In the Star Trek series, the robot Data is more intelligent than many of his human colleagues, but he has no desire to make millions of copies of himself, and therefore he poses less of a threat than, say, South American killer bees (which have been unstoppable as they have spread northward).

    Once synthetic entities have a desire to improve their own designs and to reproduce themselves, they will have many advantages over humans. Here are just a few:

    1. Factory-style replication: Humans require approximately 20 years to produce a functioning adult human. In contrast, a robotic factory could generate hundreds of robots every day. The closest event to human-style (biological) replication will occur each time a subset of those robots travels to a new location to set up a new robotic factory.

    2. Instantaneous learning: Humans have always dreamt of a "learning pill" but, instead, they have to undergo that time-consuming process called "education". Imagine if one could learn how to fly a plane just by swallowing a pill. Synthetic entities would have this capability. The brains of synthetic entities will consist of software that executes on universal computer hardware. As a result, each robot will be able to download additional software/data to instantly obtain new knowledge and capabilities.

    3. Telepathic communication: Two robots will be able to communicate by radio waves, with robot R1 directly transmitting some capability (e.g., data and/or algorithms learned through experience) to another robot R2.

    4. Immortality: A robot could back up a copy of its mind (onto some storage device) every week. If the robot were destroyed, a new version could be reconstructed with just the loss of one week's worth of memory.

    5. Harsh environments: Humans have developed clothing in order to be able to survive in cold environments. We go into a closet and select thermal leggings, gloves, goggles, etc. to go snowboarding. In contrast, a synthetic entity could go into its closet and select an alternative, entire synthetic body (for survival on different planets with different gravitational fields and atmospheres).
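    Points 2-4 above all follow from one property: a mind that is software can be serialized, copied and restored. A rough sketch of the backup-and-restore idea, with an invented RobotMind class used purely for illustration:

```python
import pickle

class RobotMind:
    """Stand-in for a software mind: its entire state is a list of memories."""
    def __init__(self):
        self.memories = []
    def experience(self, event):
        self.memories.append(event)

r1 = RobotMind()
r1.experience("learned to fly a plane")
backup = pickle.dumps(r1)               # weekly backup of the whole mind

r1.experience("a week of new memories")
# ...the robot is destroyed; reconstruct it from the last backup:
restored = pickle.loads(backup)
assert restored.memories == ["learned to fly a plane"]  # only one week is lost
```

    The same serialized state could just as well be radioed to another robot (point 3) or loaded wholesale as "instant learning" (point 2).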

    What is fascinating about Bostrom's book is that he does not emphasize any of the above. Instead, he focusses his book on the dangers, not from a society of robots more capable than humans, but, instead, on the dangers posed by a single entity with superintelligence coming about. (He does consider what he calls the "multipolar" scenario, but that is just the case of a small number of competing superintelligent entities.)

    Bostrom is a professor of philosophy at Oxford University and so the reader is also treated to issues in morality, economics, utility theory, politics, value learning and more.

    I have always been pessimistic about humanity's chances of avoiding destruction at the hands of its future AI creations, and Bostrom's book focusses on the many challenges that humanity may (soon) be facing as the development of a superintelligence becomes more and more likely.

    However, I would like to point out one issue that I think Prof. Bostrom mostly overlooks. The issue is Natural Language Processing (NLP). He allocates only two sentences to NLP in his entire book. His mention of natural language occurs in Chapter 13, in his section on "Morality models". Here he considers that, when giving descriptions to the superintelligence (of how we want it to behave), its ability to understand and carry out these descriptions may require that it comprehend human language, for example, the term "morally right".

    He states:

    "The path to endowing an AI with any of these concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by 'morally right'" (p. 218).

    I fear that Bostrom has not sufficiently appreciated the requirements of natural language comprehension and generation for achieving general machine intelligence. I don't believe that an AI entity will pose an existential threat until it has achieved at least a human level of natural language processing (NLP).

    Human-level consciousness is different than animal-level consciousness because humans are self-aware. They not only think thoughts about the world; they also think thoughts about the fact that they are thinking thoughts. They not only use specific words; they are aware of the fact that they are using words and how different categories of words differ in functionality. They are not only capable of following rules; they are aware of the fact that rules exist and that they are able to follow (or not follow) those rules. Humans are able to invent and modify rules.

    Language is required to achieve this level of self-reflective thought and creativity. I define (human-level natural) language as any system in which the internal structures of thought (whatever those happen to be, whether probabilities or vectorial patterns or logic/rule structures or dynamical attractors or neural firing patterns, etc.) are mapped onto external structures -- ones that can then be conveyed to others.

    Self-awareness arises because this mapping enables the existence of a dual system:
    Internal (Thought) Structures <---> External (Language) Structures.

    In the case of human language, these external structures are symbolic. This dual system enables an intelligent entity to take the results of its thought processes, map them to symbols and then use these symbols to trigger thoughts in other intelligent entities (or in oneself). An entity with human-level self-awareness can hold a kind of conversation with itself, in which it can refer to and thus think about its own thinking.

    Something like NLP must therefore exist BEFORE machines can reach a level of self-awareness sufficient to pose a threat to humanity. In the case of a superintelligence, this dual system may look different from human language. For example, a superintelligence might map internal thoughts not only to symbols of language but also to complex vectorial structures. But the point is the same -- something must act like an external, self-referential system -- a system that can externally refer to the thoughts and processes of that system itself.
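    The dual system can be made concrete with a toy mapping, assuming (purely for illustration) that internal thought structures are vectors and external structures are symbols:

```python
# Internal (thought) structures <---> external (language) structures.
internal_to_symbol = {(1, 0): "danger", (0, 1): "food"}
symbol_to_internal = {sym: vec for vec, sym in internal_to_symbol.items()}

def express(thought):
    """Map an internal structure out to an external symbol."""
    return internal_to_symbol[thought]

def comprehend(symbol):
    """Map an external symbol back to an internal structure."""
    return symbol_to_internal[symbol]

# Self-reference: the entity can feed its own expressed output back to itself,
# i.e., use external symbols to trigger thoughts about its own thoughts.
thought = (1, 0)
assert comprehend(express(thought)) == thought
```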

    In the case of humans, we do not have access to the internal structure of our own thoughts. But that doesn't matter. What matters is that we can map aspects of our thoughts out to external, symbolic structures. We can then communicate these structures to others (and also back to ourselves). Words/sentences of language can then trigger thoughts about the world, about ourselves, about our goals, our plans, our capabilities, about conflicts with others, about potential future events, about past events, etc.

    Bostrom seems to imply (by his oversight) that human-level (and super-human levels) of general intelligence can arise without language. I think this is highly unlikely.

    An AI system with NLP capability makes the control problem much more difficult than even Bostrom claims. Consider a human H1 who kills others because he believes that God has commanded him to kill those with different beliefs. Since he has human-level self-awareness, he should be explicitly aware of his own beliefs. If H1 is sufficiently intelligent then we should be able to communicate a counterfactual to H1 of the sort: "If you did not believe in God, or if you did not believe that God commanded you to kill infidels, then you would not kill them." That is, H1 should have access (via language) to his own beliefs and insight into how changes in those beliefs might (hypothetically) change his own behavior.

    It is this language capability that enables a person to change their own beliefs (and goals, and plans) over time. It is the self-reflective nature of human language, combined with human learning abilities, that makes it extremely difficult both to predict and to control what humans will end up believing and/or desiring (let alone superintelligent entities).

    It is extremely difficult but (hopefully) not impossible to control a self-aware entity. Consider two types of psychiatric patients: P1 and P2. Both have a compulsion to wash their hands continuously. P1 has what doctors call "insight" into his own condition. P1 states: "I know I am suffering from an obsessive/compulsive trait. I don't want to keep washing my hands but I can't help myself, and I am hoping that you, the doctors, will cure me." In contrast, patient P2 lacks "insight" and states: "I'm fine. I wash my hands all the time because it's the only way to be sure that they are not covered with germs."

    If we were asked which patient appears more intelligent (all other things being equal) we would choose P1 as being more intelligent than P2 because P1 is aware of features of P1's own thinking processes (that P2 is not aware of).

    As a superintelligent entity becomes more and more superintelligent, it will have more and more awareness of its own mental processes. With increased self-reflection it will become more and more autonomous and less able to be controlled. Like humans, it will have to be persuaded to believe in something (or to take a certain course of action). Also, this superintelligent entity will be designing even more self-aware versions of itself. Increased intelligence and increased self-reflection go hand in hand. Monkeys don't persuade humans because monkeys lack the ability to refer to the concepts that humans are able to entertain. To a superintelligent entity we will be as persuasive as monkeys (and probably much less persuasive).

    Any superintelligent entity that incorporates human general intelligence will exhibit what is commonly referred to as "free will". Personally, I do not believe that my choices are made "freely". That is, my neurons fire -- not because they choose to, but because they had to (due to the laws of physics and biochemistry). But let us define "free will" as any deterministic system with the following components/capabilities:

    a. The NLP ability to understand and generate words/sentences that refer to its own thoughts and thought processes, e.g. to be able to discuss the meaning of the word "choose".

    b. Ability to generate hypothetical, possible futures before taking an action and also, ability to generate hypothetical, alternative pasts after having taken that action.

    c. Ability to think/express counterfactual thoughts, such as "Even though I chose action AC1, I could have instead chosen AC2, and if I had done so, then the following alternative future (XYZ) would likely have occurred."
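    Capabilities (b) and (c) do not require any indeterminism. A minimal deterministic sketch, with an invented two-action world model:

```python
# Deterministic world model: each action leads to exactly one future.
world_model = {"AC1": "future_X", "AC2": "future_Y"}

def choose():
    return "AC1"  # a fixed, fully deterministic "choice"

action = choose()
outcome = world_model[action]

# Counterfactual reasoning: hold the world model fixed, vary only the action.
counterfactuals = {a: f for a, f in world_model.items() if a != action}
# "Even though I chose AC1, had I chosen AC2, future_Y would have occurred."
assert outcome == "future_X"
assert counterfactuals == {"AC2": "future_Y"}
```

    Nothing in the sketch violates determinism; the system simply represents futures it did not take, which is all (c) asks for.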

    Such a system (although each component is deterministic and so does not violate the laws of physics) will subjectively experience having "free will". I believe that a superintelligence will have this kind of "free will" -- in spades.

    Given all the recent advances in AI (e.g. autonomous vehicles, object recognition learning by deep neural networks, world master-level play at the game of Jeopardy by the Watson program, etc.) I think that Bostrom's book is very timely.

    Michael Dyer
  • I love the general idea of evaluating the potential perils of artificial super intelligence, and I buy into the concept of thinking this through at an abstract level, not tied to the current state of AI algorithms in today's computer science. That's what this book does - systematically explore every branch of a pretty large decision tree around everything that could or could not happen when an artificial intelligence starts developing super-intelligence, and how we should deal with it. So, conceptually cool. But practically, in the case of this book, not very interesting. For a couple of reasons.

    First, the level of abstraction really is taken to an extreme. Forget about any relation between arguments in this book and anything we've actually been able to do in AI research today. You won't find a discussion of a single algorithm, or even an exploration of the higher-level mathematical properties of existing algorithms, in this book. As a result, this book could have been written 30 years ago, and its arguments wouldn't be any different. Fine, I guess (the author after all is a philosophy professor, not a computer scientist); but I found this lacking at times. It gets particularly boring when the author actually does spend pages over pages introducing a framework for how our AI algorithms could improve (through speed improvement, or quality improvement, etc.) - but still doesn't tie it to anything concrete. If you want to take the abstraction high road, just dispense with super-generalized frameworks like this altogether and get to the point. The same goes for the discussion of where the recalcitrance of a future AI will come from, whether from software, content or hardware: purely abstract and speculative, even though there are real-world examples of hardware evolution outpacing software design speed and the other way around (e.g., the troubles of electronic design automation keeping up with Moore's Law).

    Second, even if you operate fully in the realm of speculation, at least make that speculation tangible and interesting. A list of things an AI could be good at includes stuff like "social persuasion" (= convince governments to do something, and hack the internet). It struck me a lot of times as the kind of ideas you'd come up with if you thought about a particular scenario for a few minutes over a beer with friends. Very few counterintuitive ideas in there. One chapter grandly announces the presentation of an elaborate "takeover scenario", i.e., how a superintelligence would actually take over the world - and again it remains completely abstract and neither original nor practical. ("AI becomes smart, starts improving itself, takes over the world" - couldn't have guessed it myself.)

    Third, a lot of the inferences in the book struck me as nothing more than one-step inferences, making it a relatively shallow brainstorming-type book. ("This could happen, and also this other thing could happen, and this third thing as well.") Systematic exploration of a large decision tree gets interesting when you start combining lots of different scenarios in counter-intuitive ways. Again the "friends over a beer" problem. At times the philosophizing in some chapters reads like a mildly interesting Star Trek episode (such as the one about how to best set goals for an AI so that it acts morally and doesn't kill us). In the best and worst ways.

    But every now and then, there's a clever historical analogy, and an interesting idea. Ronald Reagan wasn't willing to share the technology for efficiently milking cows, but he offered to share SDI with the USSR - how would AI be shared? Or the insight that the difference between the dumbest and smartest human alive is tiny on a total intelligence scale (from IQ 75 to IQ 180) - and that this means an AI would likely look to humans as if it very suddenly leapt from being really dumb to unbelievably smart, bridging this tiny human intelligence gap extremely quickly. But what struck me with regard to the best ideas in the book is that the book almost always quotes just one guy, Eliezer Yudkowsky... which made me think that if I wanted to read a thought-provoking, counter-intuitive book on AI superintelligence (as opposed to a treatise that at times appears to gloss over the shallowness of its ideas by making up for it with long text), I should just go and read Yudkowsky.

    All in all though, the topic itself is so interesting that it's worth giving the book a try.
  • The title looked good, so did the cover. Some of the reviewers are impressive. I work in large IT systems, AI, Robotics and devices. I found this book to be too much 'What if this happened...' Watch the movies I, Robot, Transformers, Terminator, and 1984. Then summarize the movies and you have this book. The book's point is that we have to watch out for the time (now?) when humans create an intelligent system that dwarfs us and takes over. I didn't read anything new that most sci-fi movies haven't already covered regarding technology ethics. However, the way this is written, there is so much what-if and speculation that reading it becomes tiresome. Sorry, didn't enjoy the book.
More about Download PDF Superintelligence Paths Dangers Strategies Nick Bostrom 9780198739838 Books

Ebook Nikon D3500 Mode d'emploi 9782412045176 Books

By Brett Callahan

Ebook Nikon D3500 Mode d'emploi 9782412045176 Books

Product details

  • Paperback
  • Publisher First (April 4, 2019)
  • Language French
  • ISBN-10 2412045178




More about Ebook Nikon D3500 Mode d'emploi 9782412045176 Books