<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="titles.xsl"?>
<record
    biblionix-libraryname="Mary Riley Styles Public Library"
    biblionix-libraryid="1263"
    biblionix-libraryusername="fallschurch"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd"
    xmlns="http://www.loc.gov/MARC21/slim">

  <leader>02949cam a2200313 i 4500</leader>
  <controlfield tag="001">592208821</controlfield>
  <controlfield tag="003">TxAuBib</controlfield>
  <controlfield tag="005">20220908120000.0</controlfield>
  <controlfield tag="008">210915s2022||||||||||||||||||||||||eng|u</controlfield>
  <datafield tag="010" ind1=" " ind2=" ">
    <subfield code="a">2021035121</subfield>
  </datafield>
  <datafield tag="020" ind1=" " ind2=" ">
    <subfield code="a">9780262046954</subfield>
    <subfield code="q">HRD</subfield>
    <subfield code="c">29.95</subfield>
  </datafield>
  <datafield tag="020" ind1=" " ind2=" ">
    <subfield code="a">0262046954</subfield>
    <subfield code="q">HRD</subfield>
    <subfield code="c">29.95</subfield>
  </datafield>
  <datafield tag="035" ind1=" " ind2=" ">
    <subfield code="a">(OCoLC)1268543396</subfield>
  </datafield>
  <datafield tag="040" ind1=" " ind2=" ">
    <subfield code="d">TxAuBib</subfield>
    <subfield code="e">rda</subfield>
  </datafield>
  <datafield tag="100" ind1="1" ind2=" ">
    <subfield code="a">Gigerenzer, Gerd,</subfield>
    <subfield code="e">author.</subfield>
  </datafield>
  <datafield tag="245" ind1="1" ind2=" ">
    <subfield code="a">How to stay smart in a smart world</subfield>
    <subfield code="h">[BOOK] :</subfield>
    <subfield code="b">why human intelligence still beats algorithms /</subfield>
    <subfield code="c">Gerd Gigerenzer.</subfield>
  </datafield>
  <datafield tag="264" ind1=" " ind2="1">
    <subfield code="a">London, England :</subfield>
    <subfield code="b">The MIT Press,</subfield>
    <subfield code="c">[2022]</subfield>
  </datafield>
  <datafield tag="300" ind1=" " ind2=" ">
    <subfield code="a">xxii, 297 pages :</subfield>
    <subfield code="b">illustrations ;</subfield>
    <subfield code="c">24 cm</subfield>
  </datafield>
  <datafield tag="336" ind1=" " ind2=" ">
    <subfield code="b">txt</subfield>
    <subfield code="2">rdacontent</subfield>
  </datafield>
  <datafield tag="337" ind1=" " ind2=" ">
    <subfield code="b">n</subfield>
    <subfield code="2">rdamedia</subfield>
  </datafield>
  <datafield tag="338" ind1=" " ind2=" ">
    <subfield code="b">nc</subfield>
    <subfield code="2">rdacarrier</subfield>
  </datafield>
  <datafield tag="504" ind1=" " ind2=" ">
    <subfield code="a">Includes bibliographical references (pages 229-284) and index.</subfield>
  </datafield>
  <datafield tag="505" ind1=" " ind2=" ">
    <subfield code="a">Is true love just a click away? -- What AI is best at : the stable-world principle -- Machines influence how we think of intelligence -- Are self-driving cars just down the road? -- Common sense AI -- One data point can beat big data -- Transparency -- Sleepwalking into surveillance -- The psychology of getting users hooked -- Safety and self-control -- Fact or fake?</subfield>
  </datafield>
  <datafield tag="520" ind1=" " ind2=" ">
    <subfield code="a">Doomsday prophets of technology predict that robots will take over the world, leaving humans behind in the dust. Tech industry boosters think replacing people with software might make the world a better place—while tech industry critics warn darkly about surveillance capitalism. Despite their differing views of the future, they all agree: machines will soon do everything better than humans. In How to Stay Smart in a Smart World, Gerd Gigerenzer shows why that’s not true, and tells us how we can stay in charge in a world populated by algorithms. Machines powered by artificial intelligence are good at some things (playing chess), but not others (life-and-death decisions, or anything involving uncertainty). Gigerenzer explains why algorithms often fail at finding us romantic partners (love is not chess), why self-driving cars fall prey to the Russian Tank Fallacy, and how judges and police rely increasingly on nontransparent “black box” algorithms to predict whether a criminal defendant will reoffend or show up in court. He invokes Black Mirror, considers the privacy paradox (people want privacy, but give their data away), and explains that social media get us hooked by programming intermittent reinforcement in the form of the “like” button. We shouldn’t trust smart technology unconditionally, Gigerenzer tells us, but we shouldn’t fear it unthinkingly, either.</subfield>
    <subfield code="c">Provided by publisher.</subfield>
  </datafield>
  <datafield tag="541" ind1=" " ind2=" ">
    <subfield code="d">20220908.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
    <subfield code="a">Artificial intelligence</subfield>
    <subfield code="x">Social aspects.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
    <subfield code="a">Expert systems (Computer science)</subfield>
    <subfield code="x">Safety measures.</subfield>
  </datafield>
  <datafield tag="650" ind1=" " ind2="0">
    <subfield code="a">Expert systems (Computer science)</subfield>
    <subfield code="x">Risk assessment.</subfield>
  </datafield>
</record>