Computers


Code Dependent: Living in the Shadow of AI – review

Published by Anonymous (not verified) on Thu, 09/05/2024 - 6:18pm

In Code Dependent, Madhumita Murgia considers the impact of AI, and technology more broadly, on marginalised groups. Though its case studies are compelling, Marie Oldfield finds the book lacking in rigorous analysis and a clear methodology, inhibiting its ability to grapple with the concerns around technology it raises.

Madhumita Murgia spoke at an LSE event, What it means to be human in a world changed by AI, in March 2024 – watch it back on YouTube.

Code Dependent: Living in the Shadow of AI. Madhumita Murgia. Picador. 2024.

Code Dependent is a collection of case studies about people from marginalised groups in society who both work in and are negatively affected by technology. However, the book’s arguments pertain to subjects such as worker and refugee rights and global economies rather than the artificial intelligence (AI) of its title. It lacks a unifying thread, and the initial chapters do not set up the purpose or main theme of the book. A clearer view is eventually provided on page 267: “the pattern that has emerged for me is the extent of the impact of AI on society’s marginalised and excluded groups; refugees, migrants, precarious workers, socioeconomic and racial minorities and women”.

Beyond algorithms, aggregated data and interconnected databases are among the most concerning and problematic ways to use data.

Beyond algorithms, aggregated data and interconnected databases are among the most concerning and problematic ways to use data. This suggests that unfit-for-purpose predictive analytics may be used for incorrect policing and manipulation of the public. We see social media manipulation of the public openly stated in manifestos from governments, world organisations and defence bodies under the auspices of “keeping people safe” or “protecting resources”. The author touches on this in the chapter “Your Rights”, which discusses nefarious uses of facial recognition software and how Meta was sued over its social media algorithm potentially facilitating murders in Ethiopia. The case study illustrates the dark side of technology, showing how easily it can be used for oppression. However, this chapter, like many of the others, feels light in detail and analysis when its subject matter could easily warrant its own book.

The book contains a number of fundamental flaws that detract from the compelling nature of its case studies. The lack of a clear methodology, of justifications for the choice of subjects examined and of an outline of the book’s purpose immediately limits the reader’s ability to access the material effectively. The author also lacks prerequisite knowledge of the philosophical and technical principles inherent in AI development, which inhibits her capacity to grasp the human experiences discussed or connect them to AI in a meaningful way. Among the more concerning failings are several statements about technology that are either incorrect or unexplained, as well as strong contradictions within the material itself. For example, the concept of “algorithm” is never defined, despite being key to the text, and the term “clean data set” is misinterpreted. The description of machine learning models (9) is technically incorrect, displaying unfamiliarity with the nature of models and algorithms. Poor data is not necessarily a driver of algorithmic bias, as Murgia suggests.

The book also lacks balance and a solid research grounding. There is a seeming intention to guide readers to specific, strong views, supported by cherry-picked research and stories that are not all suitably justified. This has the potential to be misleading. The positioning of this book in a small ecosystem of media-friendly personalities in AI leads to a myopic view of the industry and omits more robust research on recent issues and developments in AI, such as dehumanisation, funding, technical development, lack of education around algorithms and risk, and studies of weaknesses in AI implementation. The author admits to sourcing references by browsing papers from a few media-friendly AI personalities. This absence of a rigorous research methodology casts doubt on the credibility of the conclusions drawn from the case studies.

Disciplines such as philosophy, sociology and psychology are commented on, but without in-depth research and discussion of their relevance to AI, such as in the context of anthropomorphism, morality, human thought and decision-making. Thus, the topic of algorithms “hiring and firing” workers lacks a deeper discussion of why this differs from a human performing the same action. The use of “data labelling facilities” (19) to refer to warehouses where thousands of people sift images for low pay is confusing to the reader, especially when these workers are referred to as “slaves” with little choice over their own exploitation (30). The wages are discussed as being low, but not contextualised. Murgia describes vast warehouses full of non-technical people classifying images to be fed into an algorithm, a description which reveals the author’s lack of knowledge of the algorithmic design process. A possible reason for this apparent scale of “data labelling” could be that we cannot represent human experience in an algorithm.

The author avoids a nuanced discussion of the simultaneous positive and negative aspects of technologies.

The author avoids a nuanced discussion of the simultaneous positive and negative aspects of technologies. In the chapter on health, technology taking and using your x-ray data is acceptable (with no mention of consent), but in the facial recognition case it is an invasion of privacy. Aside from informed consent, this ignores the key questions of motivation, purpose and ethics. Through optimism bias, the book overlooks potentially nefarious uses of technology, such as health applications that take data without consent or for profit, as well as the positive uses of the technology behind deepfake pornography, which is also used to make avatars and animated films. This latter issue around pornography is certainly concerning, but Murgia refrains from presenting any of the remedies and current work in this area. There is a much deeper discussion to be had here. The issues are not always black and white; they are conceptually complex and require unpacking.

If Murgia had limited the book’s scope to case studies on the extent of the impact of technology and AI on marginalised and excluded groups […] or even on data transparency, it would be far more coherent.

If Murgia had limited the book’s scope to case studies on the extent of the impact of technology and AI on marginalised and excluded groups – refugees, migrants, precarious workers, socioeconomic and racial minorities and women – or even on data transparency, it would be far more coherent. As it is, the book is a long, meandering read that weaves through complex concepts and issues as if they are already understood by the reader. In order to position itself under the banner of AI, it tries to accomplish too much with too little rigorous, in-depth research, ultimately limiting its capacity to engage with the pressing concerns posed by the rapid technological development of our times.

Note: This review gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science.

Image credit: whiteMocca on Shutterstock

 

Say Hello to this Philosopher’s ExTRA

Published by Anonymous (not verified) on Thu, 18/04/2024 - 5:34am

Appropriately enough, Luciano Floridi (Yale), known for his work in the philosophy of information and technology, may be the first philosopher with a… well, what should we call this thing?

It’s an AI chatbot trained on his works that can then answer questions about what he says in them, but also can extrapolate somewhat to offer suggestions as to what he might think about topics not covered in those works.

“AI chatbot” doesn’t quite capture the connection it has to the person whose thoughts it is trained on, though. Its creator gave it the name “LuFlot.” But we need a name for the kind of thing LuFlot is, since surely there will end up being many more of them, used for more than just academic purposes.

My suggestion: “Extended Thought and Response Agent”, or “ExTRA” (henceforth, just “extra”).

Floridi’s extra was developed by Nicolas Gertler, a first-year student at Yale, and Rithvik “Ricky” Sabnekar, a high school student, “to foster engagement” with Floridi’s ideas, according to a press release:

Meant to facilitate teaching and learning, the chatbot is trained on all the books that Floridi has published over his more than 30-year academic career. Within seconds of receiving a query, it provides users detailed and easily digestible answers drawn from this vast work. It’s able to synthesize information from multiple sources, finding links between works that even Floridi might not have considered.

In part, it’s like a version of “Hey Sophi”, discussed here three years ago, except that it’s publicly accessible, and not just a personal research tool.

Gertler and Sabnekar founded Mylon Education, “a startup company seeking to transform the educational landscape by reconstructing the systems through which individuals generate and develop their ideas,” according to the press release. “LuFlot is the startup’s first project.”

You can try out Floridi’s extra here.

 

The post Say Hello to this Philosopher’s ExTRA first appeared on Daily Nous.

Using Generative AI to Teach Philosophy (w/ an interactive demo you can try) (guest post)

Published by Anonymous (not verified) on Fri, 23/02/2024 - 1:25am

Philosophy teachers—Michael Rota, a professor of philosophy at the University of St. Thomas (Minnesota), is about to make your teaching a bit better and your life a bit easier.

Professor Rota recently began learning about how to use artificial intelligence tools to teach philosophy. In the following guest post, he not only shares some suggestions, but also lets you try out two demos of his GPT-4-based interactive course tutor.

The course tutor is part of a program he is helping develop, and which should be available for other professors to use and customize sometime this summer.

Using Generative AI to Teach Philosophy
by Michael Rota

I have a friend who leads AI product strategy at a medium-sized tech company, and for about a year he’s been telling me about various impressive tasks one can accomplish with Large Language Models, like OpenAI’s GPT-4. In December I finally started listening, and began investigating how one might use AI tools as a teacher. (I’m a philosophy professor at the University of St. Thomas.) I’ve been amazed by the promise this new technology holds for instructors—in part because of the potential to increase productivity (of the teacher), but even more because of the potential to improve student learning.

In this post I’ll focus on the practical and discuss three free or low-cost tools that can be employed by a philosophy professor without any special technical expertise: (1) an interactive course tutor for your students, which you can load with your own questions and answers from your course, (2) a tool for quickly drafting a new exam, quiz, or HW assignment, and (3) a chatbot created from your own syllabus and lecture notes, so your students can query the content of your course.

The interactive course tutor

GPT-4 mimics human reasoning remarkably well (it scored in the 88th percentile on the LSAT). But it sometimes just makes stuff up. What if you could provide GPT-4 with good answers to questions you wanted your students to work through? It turns out you can, and thus it is possible to create an alarmingly capable course tutor by supplying GPT-4 with a series of question/answer pairs. This allows each student to have a one-on-one tutoring experience, and get immediate feedback as they work through an assignment.

You can play with a demo of this here.

Take the first assignment in the first module of this demo: “Think up a false conjunctive proposition.” This task has an infinite number of possible correct responses, yet GPT-4 can competently assess student answers, because the instructor-provided answer passed to GPT-4 by the program is general—it’s a recipe for correct answers, as it were. In this demo, the instructor-provided answer GPT-4 has been given is this:

A conjunctive proposition is any proposition of the form A and B, where A is a complete proposition and B is a complete proposition. A and B are called the ‘conjuncts’ of the conjunctive proposition. A conjunctive proposition is false if and only if A is false or B is false or both A and B are false. It counts as true otherwise.

That’s it. That’s enough for the AI tutor to respond accurately to almost any possible student response. A student can get the question wrong in a number of ways: for example, by entering a conjunctive proposition that’s true, or a proposition that’s not a conjunction, or something that’s not a proposition at all. GPT-4 handles all of these possibilities.
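To make the mechanism concrete, here is a minimal sketch of how such a rubric-driven tutor might be wired up. This is not the code behind Rota’s demo (which has not been published); it simply passes an instructor-provided answer as grading context to OpenAI’s chat API via the official Python client, and the model name and prompt wording are illustrative assumptions.

```python
# A minimal sketch of a rubric-driven tutor, assuming the official OpenAI
# Python client (pip install openai) and an OPENAI_API_KEY in the environment.
# This is NOT the implementation behind Rota's demo; the prompt wording and
# model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

# The instructor-provided answer quoted above, used as the grading rubric.
RUBRIC = (
    "A conjunctive proposition is any proposition of the form A and B, where "
    "A is a complete proposition and B is a complete proposition. A conjunctive "
    "proposition is false if and only if A is false or B is false or both are "
    "false. It counts as true otherwise."
)

QUESTION = "Think up a false conjunctive proposition."

def tutor_feedback(student_answer: str) -> str:
    """Ask the model to assess a student answer against the rubric."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a patient philosophy tutor. Using only the rubric "
                    "below, tell the student whether their answer is correct "
                    "and, if not, why.\n\nRubric: " + RUBRIC
                ),
            },
            {
                "role": "user",
                "content": f"Question: {QUESTION}\nStudent answer: {student_answer}",
            },
        ],
    )
    return response.choices[0].message.content

print(tutor_feedback("Grass is green and the moon is made of cheese."))
```

Because the rubric is a general recipe rather than a single expected string, the model can evaluate answers it has never seen before, which is what makes the open-ended assignment gradable.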

Using generative AI in this way offers several advantages over traditional homework assignments:

(a) students get immediate, specific feedback on each question
(b) students who need more practice can get it without having to make other students do busy work
(c) there’s less grading for teachers
(d) there is a decreased need for the teacher to explain the same thing multiple times.

How will grading work? In my view it’s too soon to hand grading over to AIs, so in my classes I plan to split the grading and the learning. The grading will be based on class participation and in-class, pen and paper exams. The learning will be facilitated in the standard ways but also with the help of an interactive course tutor based on questions and answers from my course.

Here is a second demo, allowing an instructor to test functionality by inputting a single question/answer pair and then checking how well the AI tutor handles mock student answers.

The demos linked above use an early version of the product I’m helping to design. It should be available by the summer, at which point professors will be able to create an account, input their own modules of question/answer pairs, and hit ‘submit’ to create a tutor based on their material, accessible to their students as a web app.

For a larger discussion of the promise of interactive tutors in education, see this TED talk by Sal Khan of Khan Academy.

Assignment generation

The creation of new questions for homeworks, quizzes, and exams can be time-consuming, whether one is designing a new course or just creating a new version of an existing assignment or test. Large language models are great for speeding up this process.

If you go to chat.openai.com, you can sign up for a free account with OpenAI and use GPT-3.5 at no cost. That allows you to type into a textbox, entering a prompt like “Can you give me ten sample questions on ____, suitable for a college level?” or “Here’s a question on this topic {insert a question from an old assignment}. Can you give me a very similar question, but with different details?” Used in this way, GPT-3.5 can provide some value.

But GPT-4 is much better, both because it is better at mimicking human reasoning and because it allows you to attach files. So you can attach an old assignment and ask for a very similar assignment in the same format. The downside here is that to use GPT-4 you need a ChatGPT Plus account, which costs $20 a month. An upside is that additional functionality comes along with a ChatGPT Plus account: you can access the GPT store. There you will find customized versions of GPT-4 like the “Practice Exam/Quiz/Test Creator for School” GPT, which allows you to upload course content (e.g. your lesson plans on a topic) and then ask for sample questions based on that material. With only a little more work, you can create your own GPT with access to knowledge about your own course, uploading up to 20 files, and use it to generate drafts of assignments tailored to your material.

As with any AI-generated content, think of the process like this: with the right prompting from you, the AI produces initial drafts almost instantaneously, but you’ll need to evaluate and refine before you have a final product.
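For those who prefer scripting to the ChatGPT interface, the same “give me a similar question” workflow can be driven through the API. The sketch below is my own illustration, not part of the original post; the example question, prompt text and model name are assumptions.

```python
# A sketch of scripting the assignment-variant workflow through the API
# instead of the ChatGPT UI. Prompt text, example question and model name
# are illustrative assumptions, not from the original post.
from openai import OpenAI

client = OpenAI()

old_question = (
    "Identify the main conclusion of the following argument: 'Since all "
    "humans are mortal, and Socrates is human, Socrates is mortal.'"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": (
            "Here is a question from an old logic assignment:\n"
            f"{old_question}\n\n"
            "Give me three very similar questions in the same format, "
            "but with different details."
        ),
    }],
)

# The output is a draft: it still needs instructor review before use.
print(response.choices[0].message.content)
```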

Course chatbot

Another thing you can do with ChatGPT Plus is to create a course chatbot. If you create a GPT and upload files with information about the content of your course (your syllabus, lesson plans, handouts, etc.), then anyone with access to the chatbot can ask questions like “When is the final exam in this class?”, “How did we define a ‘level of confidence’?”, or “What are the steps in Aristotle’s function argument?”. And you can give others access to the chatbot by making it available to anyone with a link. However, your students would need a ChatGPT Plus account to use it, and that may not be feasible. But there is a free workaround: if you put your course content in a pdf that is no more than 120 pages (or break it up into several), you can give the pdf(s) to your students and direct them to the landing page of ChatPDF, where they can upload the pdf and then query it for free.

If you have further questions about any of this, raise them in the comments or email them to me.

 

The post Using Generative AI to Teach Philosophy (w/ an interactive demo you can try) (guest post) first appeared on Daily Nous.

The Quickest Revolution: An Insider’s Guide to Sweeping Technological Change, and Its Largest Threats – review

Published by Anonymous (not verified) on Thu, 25/01/2024 - 11:07pm

In The Quickest Revolution, Jacopo Pantaleoni examines modern technological progress and the history of computing. Bringing to bear his background as a visualisation software designer and a philosophical lens, Pantaleoni illuminates the threats that technological advancements like AI, the Metaverse, and Deepfakes pose to society, writes Hermano Luz Rodrigues.

The Quickest Revolution: An Insider’s Guide to Sweeping Technological Change, and Its Largest Threats. Jacopo Pantaleoni. Mimesis International. 2023.

Find this book: Amazon (affiliate link)

“This changes everything” is perhaps the most hackneyed phrase found in YouTube videos when the topic happens to be new technologies. Such videos typically feature enthusiastic presenters describing the marvellous potential of a soon-to-come technology, and a comment section that shares the same optimism. These videos proliferate daily, receiving hundreds of thousands of views. Regardless of whether we take them at face value or with extreme scepticism, their abundance illustrates the craze for technological progress and, more importantly, that a critical view of this attention is wanting.

Pantaleoni uses theories such as Moore’s Law, the observation that transistor counts – and with them computing power – double roughly every two years, and inputs from his career and personal experiences, to frame the history of, and the philosophical ideas driving, technological change.

In The Quickest Revolution, Jacopo Pantaleoni aims to fill this gap by supplying the reader with a critical, yet personal, analysis of modern technological progress and its impact on society. Coming from a background in computer science and visualisation software development, Pantaleoni uses theories such as Moore’s Law, the observation that transistor counts – and with them computing power – double roughly every two years, and inputs from his career and personal experiences, to frame the history of, and the philosophical ideas driving, technological change.
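As a rough worked example of the exponential growth Moore’s Law describes (my illustration, not Pantaleoni’s, assuming a strict two-year doubling from the Intel 4004’s roughly 2,300 transistors in 1971):

```python
# A rough worked illustration of Moore's Law, assuming a strict two-year
# doubling period from the Intel 4004's ~2,300 transistors in 1971.
# Real chips only loosely track this idealised curve.
def transistors(year: int, base_year: int = 1971, base_count: int = 2300) -> float:
    """Estimated transistor count under a two-year doubling rule."""
    return base_count * 2 ** ((year - base_year) / 2)

for year in (1971, 1991, 2011, 2021):
    print(year, f"{transistors(year):,.0f}")
# 50 years of doubling turns thousands of transistors into tens of billions.
```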

The first few chapters of the book are devoted to a survey of the defining moments of pre-modern scientific advancements in the Western world. The chapters include breakthroughs from historical figures such as Copernicus, Galileo and Bacon. The author then fast-forwards to the 20th century to briefly introduce the achievements of the godfathers of computer science like Alan Turing. The descriptions of these events foreshadow the book’s main focus on contemporary technological development and its concerns. In the latter, Pantaleoni approaches many tech-related keywords trending today from a philosophical perspective: AI, Metaverse, Deepfakes, and Simulation, among others.

what distinguishes Pantaleoni’s approach is the fact that he analyses these themes with a gaze that stems from the fields of realistic visualisation and simulation.

While such at-issue discourses on contemporary technology may be plentiful among enthusiasts (eg, podcasts like Lex Fridman’s), what distinguishes Pantaleoni’s approach is that he analyses these themes with a gaze that stems from the fields of realistic visualisation and simulation. This distinction is not to be taken lightly. Throughout the book, there are surprising overlaps between these specific fields and society’s perception of, and interest in, technology. For example, the author notes how films such as The Matrix, which used technology to simulate and depict “another reality that did look real”, offer proof of “how deeply computer graphics has been affecting our culture” (185). In fact, he argues that not only did sci-fi and CGI-laden media foment interest in stories about simulated worlds, but the technological achievements of such productions heavily contributed to society’s adoration and pursuit of advancements in realistic visualisations and simulations.

Pantaleoni acknowledges that society’s pursuit of a realistic-simulated future is replete with potential benefits, such as reduction of operation costs, accessibility through remote work, and engagement by telepresence. But, he notes that it may bring forth undesirable consequences

Pantaleoni acknowledges that society’s pursuit of a realistic-simulated future is replete with potential benefits, such as reduction of operation costs, accessibility through remote work, and engagement by telepresence. But, he notes, it may bring forth undesirable consequences for the physical world. For him, such aspirations implicitly denote a belief that “advances in photorealistic rendering, networking, and artificial intelligence will provide us the tools to build a better version of reality” (244). He cautions that this reality exodus neglects existing problems, and poses the question: “If we are failing to set things straight in the real world, what chances do we have to fare better, or ‘do it right’ in a hypothetical Metaverse?” (244).

The book makes the case that there are signs that the hitherto inexorable drive for progress in these technologies is leading to devastating effects. As practical examples, the author cites the impacts these technologies have had on political elections, the economy, and collective identity, among others. The book also underscores how the physical and the virtual/simulated have become increasingly intertwined through technology. Sherry Turkle observed this phenomenon many years prior in her presentation Artificial Intelligence at 50: “When Animal Kingdom opened in Orlando, populated by ‘real’, that is, biological animals, its first visitors complained that these animals were not as ‘realistic’ as the animatronic creatures in Disneyworld”. That is, while the animatronics featured “typical” characteristics, the real animals were perceived as static in comparison.

In a similar fashion, Pantaleoni recognises the capacity of contemporary technologies to shift perceptions and to act in society as proxies. He writes that the overwhelming majority of Deepfakes, for example, create pornographic or otherwise troubling scenes using celebrities. Furthermore, he notes that Artificial Intelligence (AI) chatbots are capable of impersonating a human being and that AI is automating both physical and mental human labour.

Whatever risks these new technologies seem to embody, however, are often brushed off by enthusiasts. This rather careless stance might be due to what Pantaleoni describes as a “blind” faith in technological progress, a belief akin to a “new and widely spread religion” (242). At its core, this techie religion is based on the imperative that technological growth is not to be questioned or impeded, for it makes “promises of a better reality” (243).

While previous technologies were essentially engineered by humans, society is transitioning towards new technologies that are increasingly autonomous and uncontrollable

Two arguments regarding the implications of this “religion” may be extracted from the book. The first argument is that for the zealots, it doesn’t matter how things progress (the means), as long as they continue to do so (produce results). While previous technologies were essentially engineered by humans, society is transitioning towards new technologies that are increasingly autonomous and uncontrollable, because these new technologies produce results that are “far much better than any handcrafted algorithm a human could make” (126).

Similar to the deceiving Mechanical Turk of the 18th century, many of today’s black-box technologies are very convincing in providing an illusion of their capabilities, while little is known about their under-the-hood properties or actual affordances.

The second argument is that what is perceived as progress may actually be a sort of artifice. Similar to the deceiving Mechanical Turk of the 18th century, many of today’s black-box technologies are very convincing in providing an illusion of their capabilities, while little is known about their under-the-hood properties or actual affordances. This concealment of properties and their seductive realism lure techno-enthusiasts because of their desire to believe in them. Pantaleoni reminds us, however, that image-generative AI models, for instance, “know nothing about physics laws and accurate simulations” (141). Instead, they achieve extreme realism by being fed millions of training examples (141).

Throughout the book, Pantaleoni engages the reader in the challenges of technological development, through a distinct and compelling gaze – that of his specialisation in realistic visualisation software. Moreover, he does so in the tone of a passionate advocate of technology and a worried critic. There are a variety of contemporary “revolution” topics and discussions, such as the ethics behind the implementation of new technologies or their impact on the economy, and depending on each reader’s preferences and interests, some will resonate more than others. However, readers are likely to find the historical accounts narrated in the first few chapters disjointed from the book’s focus. These accounts are broad and familiar, with much of their content being assumed knowledge for most readers. Nevertheless, Pantaleoni offers notable contributions to the field with his shrewd observations anchored by his vast experience. In a field saturated with either theorists or quacks, it is especially commendable to read a book from the perspective of a practitioner.

This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science. The LSE RB blog may receive a small commission if you choose to make a purchase through the above Amazon affiliate link. This is entirely independent of the coverage of the book on LSE Review of Books.

Image Credit: Bruce Rolff on Shutterstock.

Wifi Dispenser

Published by Anonymous (not verified) on Fri, 19/01/2024 - 2:02am

Tags: Computers

Wifi dispenser. Are YOUR hands 5 GHz compatible, or just 2.4?

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech – review

Published by Anonymous (not verified) on Thu, 28/12/2023 - 9:00pm

In More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Meredith Broussard scrutinises bias encoded into a range of technologies and argues that their eradication should be prioritised as governments develop AI regulation policy. Broussard’s rigorous analysis spotlights the far-reaching impacts of invisible biases on citizens globally and offers practical policy measures to tackle the problem, writes Fabian Lütz.

More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Meredith Broussard. MIT Press. 2023. 

Find this book: Amazon (affiliate link)

As the world witnesses advancements in the use of Artificial Intelligence (AI) and new technologies, governments around the world, such as the UK and US, the EU and international organisations, are slowly starting to propose concrete measures, regulation and AI bodies to mitigate any potential negative effects of AI on humans. Against this background, More than a Glitch offers a timely and relevant contribution to the current AI regulatory debate. It provides a balanced look at biases and discriminatory outcomes of technologies, focusing on race, gender and ability bias, topics that tend to receive less attention in public policy discussions. The author’s academic and computer science background, as well as her previous book Artificial Unintelligence – How Computers Misunderstand the World, make her an ideal author to delve into this important societal topic. The book addresses algorithmic biases and algorithmic discrimination, which not only receive increasing attention in academic circles but are of practical relevance, given their potential impacts on citizens and the regulatory choices to be made in the coming months and years.

[More than a Glitch] provides a balanced look at biases and discriminatory outcomes of technologies, focusing on race, gender and ability bias, topics that tend to receive less attention in public policy discussions

The book’s cornerstone is that technology is not neutral, and therefore racism, sexism and ableism are not mere glitches, but are coded into AI systems.

Broussard argues that “social fairness and mathematical fairness are different. Computers can only calculate mathematical fairness” (2). This paves the way to understand that biases and discriminatory potential are encoded in algorithmic systems, notably by those who have the power to define the models, write the underlying code and decide which datasets to use. She argues that rather than just making technology companies more inclusive, the exclusion of some demographics in the conceptualisation and design of frameworks needs to stop. The main themes of the book, which spans eleven short chapters, are machine bias, facial recognition, fairness and justice systems, student grading by algorithms, ability bias, gender, racism, medical algorithms, the creation of public interest technology and options to “reboot” the system and society.

Biases and discriminatory potential are encoded in algorithmic systems, notably by those who have the power to define the models, write the underlying code and decide which datasets to use.

Two chapters stand out in Broussard’s attempt to make sense of the problems at hand: Chapter Two, “Understanding Machine Bias”, and more specifically Chapter Seven, “Gender Rights and Databases”. Both illustrate the author’s compelling storytelling skills and her ability to explain complex problems and decipher the key issues surrounding biases and discrimination.

Chapter Two describes one of the major applications of AI, machine learning, which Broussard defines as taking

“a bunch of historical data and instruct[ing] a computer to make a model. The model is a mathematical construct that allows us to predict patterns in the data based on what already exists. Because the model describes the mathematical patterns in the data, patterns that humans can’t easily see, you can use that model to predict or recommend something similar” (12).

The author distinguishes between different forms of training a model and discusses the so-called “black box problem” – the fact that AI systems are very often opaque – and the explainability of machine decisions. Starting from discriminatory treatment of bank loan applications, for example credit score assessment on the basis of length of employment, income or debt, the author explains with illustrative graphs how algorithms find correlations in datasets that can lead to discriminatory outcomes. She explains that, unlike humans, machines have the capacity to analyse huge datasets, enabling banks, for example, to make predictions about the probability of loan repayment. The mathematics underlying such predictions is based on what similar groups of people with similar variables have done in the past. This complex process often hides underlying biases and potential for discrimination. As Broussard points out,

“Black applicants are turned away more frequently than white applicants [and] are offered mortgages at higher rates than white counterparts with the same data […]” (25).
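The mechanism Broussard describes can be made concrete with a toy numerical sketch. The example below is my illustration, not the book’s: it fabricates a deliberately biased “historical approvals” dataset (a scaled income plus a protected group attribute) and shows that a model fitted to it reproduces the bias, assuming numpy and scikit-learn.

```python
# A toy sketch of how a model trained on biased historical lending
# decisions learns to reproduce that bias. Synthetic data; my illustration,
# not from the book. Assumes numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

income = rng.normal(size=n)          # scaled income
group = rng.integers(0, 2, size=n)   # a protected attribute, 0 or 1

# Historical approvals were biased: at identical incomes, group 1
# applicants were approved less often.
approved = income + rng.normal(scale=0.5, size=n) - 0.8 * group > 0

X = np.column_stack([income, group])
model = LogisticRegression().fit(X, approved)

# The learned model penalises group membership itself, perpetuating the
# historical inequality Broussard warns about.
for g in (0, 1):
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"same income, group {g}: approval probability {p:.2f}")
```

Two applicants with the same income receive different predicted approval probabilities purely because the training data encoded past discrimination, which is exactly Broussard’s point about inequality in training data.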

The book also demonstrates convincingly that the owners or designers of the model wield a powerful tool to shape decisions for society. Broussard sums up the chapter and provides crucial advice for AI developers when she states,

“If training data is produced out of a system of inequality, don’t use it to build models that make important social decisions unless you ensure the model doesn’t perpetuate inequality” (28).

Chapter Seven looks at how databases impact gender rights, starting with the example of gender transition, which is recorded in official registers. This example illustrates the limitations of algorithmic systems as compared to humans, not only in light of the traditional binary system for assigning gender as male or female, but more generally in light of the binary system that lies at the heart of computing. In both the gender binary and the computer binary framework, choices must be made between one or the other, leaving no flexibility. Broussard describes the binary system as follows:

“Computers are powered by electricity, and the way they work is that there is a transistor, a kind of gate, through which electricity flows. If the gate is closed, electricity flows through, and that is represented by a 1. If the gate is open, there is no electricity, and that is represented by a 0” (107).

When programmers design an algorithm, they “superimpose human social values onto a mathematical system.” Broussard urges us to ask ourselves, “Whose values are encoded in the system?” (109).

The resulting choices that need to be made within AI systems, or in forms used in administration, often do not adequately represent reality. People who do not feel represented by the options of male and female, such as gender non-conforming people, are asked to choose a category even though it does not reflect their gender identity. Here again, Broussard reminds us of the importance of design choices and of the assumptions of coders, which impact people’s everyday lives. When programmers design an algorithm, they “superimpose human social values onto a mathematical system.” Broussard urges us to ask ourselves, “Whose values are encoded in the system?” (109). The chapter concludes with the challenge of making “technological systems more inclusive” (116) and argues that computers constitute not only mathematical but sociotechnical systems that need to be updated regularly in order to reflect societal change.
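A minimal illustration of the design choice Broussard criticises, using Python’s built-in sqlite3 module (my sketch, not an example from the book): a schema with a hard binary constraint simply cannot record people the designers did not anticipate, while a self-described field can.

```python
# How a database schema encodes social values: a constrained column forces
# every record into two categories; a self-described field does not.
# My illustration, not from the book. Uses the standard-library sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")

# Rigid design: gender must be 'M' or 'F'; anyone else cannot be recorded.
conn.execute("""CREATE TABLE person_rigid (
    name TEXT,
    gender TEXT CHECK (gender IN ('M', 'F'))
)""")

# More flexible design: gender is optional, self-described text.
conn.execute("""CREATE TABLE person_flexible (
    name TEXT,
    gender TEXT  -- self-described; may be NULL
)""")

conn.execute("INSERT INTO person_flexible VALUES (?, ?)",
             ("Alex", "non-binary"))  # accepted

try:
    conn.execute("INSERT INTO person_rigid VALUES (?, ?)",
                 ("Alex", "non-binary"))
except sqlite3.IntegrityError as err:
    print("Rejected by the rigid schema:", err)
```

The values are encoded at table-creation time, long before any user fills in a form, which is Broussard’s point about whose assumptions end up in the system.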

Computers constitute not only mathematical but sociotechnical systems that need to be updated regularly in order to reflect societal change.

The book successfully describes the invisible dangers and impacts of these rapidly advancing technologies in terms of race, gender and ability bias, making these ideas accessible through concrete examples. Ability bias is discussed in the chapter “Ability and Technology”, where she gives several examples of how technology companies try to provide technology that serves the disabled community in their daily jobs and lives. She gives the example of Apple shops where either sign language interpreters are available or Apple equips employees with an iPad to communicate with customers. For consumers, she also highlights the VoiceOver screen reader, auto-captioning and transcripts of audio, and the read-aloud functions of newspaper sites. Broussard points both to the advantages and the limitations of those technological solutions.

She also introduces the idea of tackling biases and discrimination with the help of audit systems

Readers are invited to reflect on concrete policy proposals and suggestions, on the basis of some ideas sketched out in the last chapter, “Potential Reboot”, where she shows her enthusiasm for the EU’s proposed AI Act and the US Algorithmic Accountability Act. She also introduces the idea of tackling biases and discrimination with the help of audit systems and presents a project for one such system based on the regulatory sandbox idea, which is a “safe space for testing algorithms or policies before unleashing them on the world” (175). The reader might wish that Broussard’s knowledge of technology and awareness of discrimination issues could have informed the ongoing policy debate even further.

In sum, the book will be of interest and use to a wide range of readers, from students, specialised academics, policy makers and AI experts to those new to the field who want to learn more about the impacts of AI on society.

This post gives the views of the author, and not the position of the LSE Review of Books blog, or of the London School of Economics and Political Science. The LSE RB blog may receive a small commission if you choose to make a purchase through the above Amazon affiliate link. This is entirely independent of the coverage of the book on LSE Review of Books.

Image Credit: Vintage Tone on Shutterstock.