Information Technology


What OpenAI shares with Scientology

Published by Anonymous (not verified) on Wed, 22/11/2023 - 1:46am

When Sam Altman was ousted as CEO of OpenAI, some hinted that lurid depravities lay behind his downfall. Surely, OpenAI’s board wouldn’t have toppled him if there weren’t some sordid story about to hit the headlines? But the reporting all seems to be saying that it was God, not Sex, that lay behind Altman’s downfall. And Money, that third great driver of human behavior, seems to have driven his attempted return and his new job at Microsoft, which is OpenAI’s biggest investor by far.

As the NYT describes the people who pushed Altman out:

Ms. McCauley and Ms. Toner [HF – two board members] have ties to the Rationalist and Effective Altruist movements, a community that is deeply concerned that A.I. could one day destroy humanity. Today’s A.I. technology cannot destroy humanity. But this community believes that as the technology grows increasingly powerful, these dangers will arise.

McCauley and Toner reportedly worried that Altman was pushing too hard, too quickly, for new and potentially dangerous forms of AI (similar fears led some OpenAI people to bail out and found a competitor, Anthropic, a couple of years ago). The FT’s reporting confirms that the fight was over how quickly to commercialize AI.

The back-story to all of this is actually much weirder than the average sex scandal. The field of AI (in particular, its debates around Large Language Models (LLMs) like OpenAI’s GPT-4) is profoundly shaped by cultish debates among people with some very strange beliefs.

As LLMs have become increasingly powerful, theological arguments have begun to mix it up with the profit motive. That explains why OpenAI has such an unusual corporate form – it is a non-profit, with a for-profit structure retrofitted on top, sweatily entangled with a profit-maximizing corporation (Microsoft). It also plausibly explains why these tensions have exploded into the open.

I joked on Bluesky that the OpenAI saga was as if “the 1990s browser wars were being waged by rival factions of Dianetics striving to control the future.” Dianetics – for those who don’t obsess on the underbelly of American intellectual history – was the 1.0 version of L. Ron Hubbard’s Scientology. Hubbard hatched it in collaboration with the science fiction editor John W. Campbell (who had a major science fiction award named after him until 2019, when his racism finally caught up with his reputation).

The AI safety debate, too, is an unintended consequence of genre fiction. In 1987, the multiple Hugo Award-winning science-fiction critic Dave Langford began a discussion of the “newish” genre of cyberpunk with a complaint about an older genre of story about information technology, in which “the ultimate computer is turned on and asked the ultimate question, and replies ‘Yes, now there is a God!’”

However, the cliche didn’t go away. Instead, it cross-bred with cyberpunk to produce some quite surprising progeny. The midwife was the writer Vernor Vinge, who proposed a revised meaning for “singularity.” This was a term already familiar to science fiction readers as the place inside a black hole where the ordinary predictions of physics broke down. Vinge suggested that we would soon likely create true AI, which would be far better at thinking than baseline humans, and would change the world in an accelerating process, creating a historical singularity, after which the future of the human species would be radically unpredictable.

These ideas were turned into novels by Vinge himself, including A Fire Upon the Deep (fun!) and Rainbows End (weak!). Other SF writers like Charles Stross wrote novels about humans doing their best to co-exist with “weakly godlike” machine intelligence (also fun!). Others who had no notable talent for writing, like the futurist Ray Kurzweil, tried to turn the Singularity into the foundation stone of a new account of human progress. I still possess a mostly-unread copy of Kurzweil’s mostly-unreadable magnum opus, The Singularity is Near, which was distributed en masse to bloggers like meself in an early 2000s marketing campaign. If I dug hard enough in my archives, I might even be able to find the message from a publicity flack expressing disappointment that I hadn’t written about the book after they sent it. All this speculation had a strong flavor of end-of-days. As the Scots science fiction writer Ken MacLeod memorably put it, the Singularity was the “Rapture of the Nerds.” Ken, being the offspring of a Free Presbyterian preacher, knows a millenarian religion when he sees it: Kurzweil’s doorstopper should really have been titled The Singularity is Nigh.

Science fiction was the gateway drug, but it can’t really be blamed for everything that happened later. Faith in the Singularity has roughly the same relationship to SF as UFO-cultism does. A small minority of SF writers are true believers; most are hearty skeptics, but recognize that superhuman machine intelligences are (a) possible and (b) an extremely handy engine of plot. But the combination of cultish Singularity beliefs and science fiction has influenced a lot of external readers, who don’t distinguish sharply between the religious and fictive elements, but mix and meld them to come up with strange new hybrids.

Just such a syncretic religion provides the final part of the back-story to the OpenAI crisis. In the 2010s, ideas about the Singularity cross-fertilized with notions about Bayesian reasoning and some really terrible fanfic to create the online “rationalist” movement mentioned in the NYT.

I’ve never read a text on rationalism, whether by true believers, by hangers-on, or by bitter enemies (often erstwhile true believers), that really gets the totality of what you see if you dive into its core texts and apocrypha. And I won’t even try to provide one here. It is some Very Weird Shit and there is really great religious sociology to be written about it. The fights around Roko’s Basilisk are perhaps the best-known example of rationalism in action outside the community, and give you some flavor of the style of debate. But the very short version is that Eliezer Yudkowsky and his multitudes of online fans embarked on a massive collective intellectual project, which can reasonably be described as resurrecting David Langford’s hoary 1980s SF cliche, and treating it as the most urgent dilemma facing human beings today. We are about to create God. What comes next? Add Bayes’ Theorem to Vinge’s core ideas, sez rationalism, and you’ll likely find the answer.
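
(For anyone who has forgotten it, Bayes’ Theorem itself is a perfectly respectable rule for updating your confidence in a hypothesis H when evidence E turns up; in the standard notation:

    P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}

The rationalist wager is that this one rule, applied with sufficient diligence, scales all the way up from weighing ordinary evidence to working out what a newly created machine god is likely to do.)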

The consequences are what you might expect when a crowd of bright but rather naive (and occasionally creepy) computer science and adjacent people try to re-invent theology from first principles, to model what human-created gods might do, and how they ought to be constrained. They include the following, non-comprehensive list: all sorts of strange mental exercises; postulated superhuman entities, benign and malign, and how to think about them; the jumbling of parts from fan-fiction, computer science, home-brewed philosophy and ARGs to create grotesque and interesting intellectual chimeras; Nick Bostrom and a crew of very well-funded philosophers; Effective Altruism, whose fancier adherents often prefer not to acknowledge the approach’s somewhat disreputable origins.

All this would be sociologically fascinating, but of little real-world consequence, if it hadn’t profoundly influenced the founders of the organizations pushing AI forward. These luminaries think about the technologies they are creating in terms borrowed wholesale from the Yudkowsky extended universe. The risks and rewards of AI are seen as largely commensurate with the risks and rewards of creating superhuman intelligences, modeling how they might behave, and ensuring that we end up in a Good Singularity, in which AIs do not destroy or enslave humanity as a species, rather than a bad one.

Even if rationalism’s answers are uncompelling, it asks interesting questions that might have real human importance. However, it is at best unclear that theoretical debates about immanentizing the eschaton tell us very much about actually-existing “AI,” a family of important and sometimes very powerful statistical techniques, which are being applied today, with emphatically non-theoretical risks and benefits.

Ah, well, nevertheless. The rationalist agenda has demonstrably shaped the questions around which the big AI ‘debates’ regularly revolve; witness the Rishi Sunak/Sam Altman/Elon Musk love-fest “AI Summit” in London a few weeks ago.

We are on a very strange timeline. My laboured Dianetics/Scientology joke can be turned into an interesting hypothetical. It actually turns out (I only stumbled across this recently) that Claude Shannon, the creator of information theory (and, by extension, the computer revolution) was an L. Ron Hubbard fan in later life. In our continuum, this didn’t affect his theories: he had already done his major work. Imagine, however, a parallel universe, where Shannon’s science and standom had become intertwined and wildly influential, so that debates in information science obsessed over whether we could eliminate the noise of our engrams, and isolate the signal of our True Selves, allowing us all to become Operating Thetans. Then reflect on how your imagination doesn’t have to work nearly as hard as it ought to. A similarly noxious blend of garbage ideas and actual science is the foundation stone of the Grand AI Risk Debates that are happening today.

To be clear – not everyone working on existential AI risk (or ‘x risk’ as it is usually summarized) is a true believer in Strong Eliezer Rationalism. Most, very probably, are not. But you don’t need all that many true believers to keep the machine running. At least, that is how I interpret this Shazeda Ahmed essay, which describes how some core precepts of a very strange set of beliefs have become normalized as the background assumptions for thinking about the promise and problems of AI. Even if you, as an AI risk person, don’t buy the full intellectual package, you find yourself looking for work in a field where the funding, the incentives, and the organizational structures mostly point in a single direction (NB – this is my jaundiced interpretation, not hers).

There are two crucial differences between today’s AI cult and golden age Scientology. The first was already mentioned in passing. Machine learning works, and has some very important real life uses. E-meters don’t work and are useless for any purpose other than fleecing punters.

The second (which is closely related) is that Scientology’s ideology and money-hustle reinforce each other. The more that you buy into stories about the evils of mainstream psychology, the baggage of engrams that is preventing you from reaching your true potential and so on and so on, the more you want to spend on Scientology counselling. In AI, in contrast, God and Money have a rather more tentative relationship. If you are profoundly worried about the risks of AI, should you be unleashing it on the world for profit? That tension helps explain the fight that has just broken out into the open.

It’s easy to forget that OpenAI was founded as an explicitly non-commercial entity, the better to balance the rewards and the risks of these new technologies. To quote from its initial manifesto:

It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly. Because of AI’s surprising history, it’s hard to predict when human-level AI might come within reach. When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.

We’re hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

That … isn’t quite how it worked out. The Sam Altman justification for deviating from this vision, laid out in various interviews, is that it turned out to be just too damned expensive to train the models as they grew bigger, and bigger, and bigger. This necessitated the creation of an add-on structure, which would sidle into profitable activity. It also required massive cash infusions from Microsoft (reportedly in the range of $13 billion), which also has an exclusive license to OpenAI’s most recent LLM, GPT-4. Microsoft, it should be noted, is not in the business of prioritizing “a good outcome for all over its own self-interest.” It looks, instead, to invest its resources along the very best Friedmanite principles, so as to create whopping returns for shareholders. And $13 billion is a lot of invested resources.

This very plausibly explains the current crisis. OpenAI’s governance arrangements are shaped by the fact that it was a non-profit until relatively recently. The board is a non-profit board. The two members already mentioned, McCauley and Toner, are not the kind of people you would expect to see making the big decisions for a major commercial entity. They plausibly represent the older rationalist vision of what OpenAI was supposed to do, and the risks that it was supposed to avert.

But as OpenAI’s ambitions have grown, that vision has been watered down in favor of making money. I’ve heard that there were a lot of people in the AI community who were really unhappy with OpenAI’s initial decision to let GPT rip. That spurred the race for commercial domination of AI which has shaped pretty well everything that has happened since, leading to model after model being launched, and to hell with the consequences. People like Altman still talk about the dangers of AGI. But their organizations and businesses keep releasing more, and more powerful systems, which can be, and are being, used in all sorts of unanticipated ways, for good and for ill.

It would perhaps be too cynical to say that AGI existential risk rhetoric has become a cynical hustle, intended to redirect the attention of regulators toward possibly imaginary future risks, and away from problematic but profitable activities that are happening right now. Human beings have an enormous capacity to fervently believe in things that it is in their self-interest to believe, and to update those beliefs as those interests change or become clearer. I wouldn’t be surprised at all if Altman sincerely thinks that he is still acting for the good of humankind (there are certainly enough people assuring him that he is). But it isn’t surprising either that the true believers are revolting, as Altman stretches their ideology ever further and thinner to facilitate raking in the benjamins.

The OpenAI saga is a fight between God and Money; between a quite peculiar quasi-religious movement, and a quite ordinary desire to make cold hard cash. You should probably be putting your bets on Money prevailing in whatever strange arrangement of forces is happening as Altman is beamed up into the Microsoft mothership. But we might not be all that much better off in this particular case if the forces of God were to prevail, and the rationalists who toppled Altman were to win a surprising victory. They want to slow down AI, which is good, but for all sorts of weird reasons, which are unlikely to provide good solutions for the actual problems that AI generates. The important questions about AI are the ones that neither God nor Mammon has particularly good answers for – but that’s a topic for future posts.

Never Mind the Privacy: The Great Web 2.0 Swindle

Published by Matthew Davidson on Wed, 01/03/2017 - 1:43pm

The sermon today comes from this six-minute video from comedian Adam Conover: The Terrifying Cost of "Free" Websites

I don't go along with the implication here that the only conceivable reason to run a website is to directly make money by doing so, and that therefore it is our expectation of zero-cost web services that is the fundamental problem. But from a technical point of view the sketch's analogy holds up pretty well. Data-mining commercially useful information about users is the business model of Software as a Service (SaaS) — or Service as a Software Substitute (SaaSS) as it's alternatively known.

You as the user of these services — for example social networking services such as Facebook or Twitter, content delivery services such as YouTube or Flickr, and so on — provide the "content", and the service provider provides data storage and processing functionality. There are two problems with this arrangement:

  1. You are effectively doing your computing using a computer and software you don't control, and whose workings are completely opaque to you.
  2. As is anybody who wants to access anything you make available using those services.

Even people who don't have user accounts with these services can be tracked, because they can be identified via browser fingerprinting, and the tracking follows them as they browse beyond the tracking organisation's own website. Third-party JavaScript "widgets" embedded in many, if not most, websites silently deliver executable code to users' browsers, allowing them to be tracked as they go from site to site. Common examples of such widgets include syndicated advertising, "like" buttons, social login services (e.g. Facebook login), and comment hosting services. Less transparent are third-party services marketed to the site owner, such as Web analytics. These provide data on a site's users in the form of the graphs and charts so beloved by middle management, with the service provider of course hanging on to a copy of all the data for their own purposes. My university invites no fewer than three organisations to surveil its students in this way (New Relic, Crazy Egg, and of course Google Analytics). Thanks to Edward Snowden, we know that government intelligence agencies are secondary beneficiaries of this data collection in the case of companies such as Google, Facebook, Apple, and Microsoft. For companies not named in these leaks, all we can say is that we do not — because as users we cannot — know whether they are passing on information about us as well.
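
To make the mechanism concrete, here is a minimal sketch, in TypeScript, of the kind of thing a third-party widget script can do the instant a page embeds it. The endpoint and field names are invented for illustration; no particular vendor's code is being quoted, and real trackers differ in detail rather than in kind.

    // Hypothetical third-party widget script, served from a tracker's CDN and
    // executed with full access to whatever page embedded it.

    interface TrackingEvent {
      site: string;        // the customer site that embedded the widget
      page: string;        // the page the visitor is reading right now
      referrer: string;    // where the visitor came from
      fingerprint: string; // crude browser fingerprint, stable across sites
      visitorId: string;   // identifier for repeat visits to this site
    }

    // Build a crude fingerprint from signals the browser volunteers to any script.
    function fingerprint(): string {
      const signals = [
        navigator.userAgent,
        navigator.language,
        `${screen.width}x${screen.height}x${screen.colorDepth}`,
        String(new Date().getTimezoneOffset()),
      ];
      return btoa(signals.join("|"));
    }

    // Reuse (or mint) an identifier that survives across visits to this site.
    // The tracker can additionally set its own third-party cookie on the response
    // to the request below, where the browser still permits it.
    function visitorId(): string {
      const match = document.cookie.match(/tracker_id=([^;]+)/);
      if (match) return match[1];
      const id = Math.random().toString(36).slice(2);
      document.cookie = `tracker_id=${id}; max-age=31536000; path=/`;
      return id;
    }

    const event: TrackingEvent = {
      site: location.hostname,
      page: location.pathname,
      referrer: document.referrer,
      fingerprint: fingerprint(),
      visitorId: visitorId(),
    };

    // Phone home. The hostname is made up for the example.
    navigator.sendBeacon("https://collect.tracker.example/event", JSON.stringify(event));

Multiply that by every site carrying the same widget, and the widget's operator can reconstruct a good slice of your browsing history without you ever visiting its own site or agreeing to anything.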

To understand how things might be different, one must look at the original vision for the Internet and the World Wide Web. The Web was a victim of its own early success. The Internet was designed to be "peer-to-peer", with every connected computer considered equal, and the network which connected them completely oblivious to the nature of the data it was handling. You requested data from somebody else on the network, and your computer then manipulated and transformed that data in useful ways. It was a "World of Ends"; the network was dumb, and the machines at each end of a data transfer were smart. Unfortunately the Web took off when easy-to-use Web browsers were available, but before easy-to-use Web servers were available. Moreover, Web browsers were initially intended to be tools to both read and write Web documents, but the second goal soon fell away. You could easily consume data from elsewhere, but not easily produce it and make it available yourself.

The Web soon succumbed to the client-server model, familiar from corporate computer networks — the bread and butter of tech firms like IBM and Microsoft. Servers occupy a privileged position in this model. The value is assumed to be at the centre of the network, while at the ends are mere consumers. This translates into social and economic privilege for the operators of servers, and a role for users shaped by the requirements of service providers. This was, breathless media commentary aside, the substance of the "Web 2.0" transformation.

Consider how the ideal Facebook user engages with their Facebook friends. They share an amusing video clip. They upload photos of themselves and others, while in the process providing the machine learning algorithm of Facebook's facial recognition surveillance system with useful feedback. They talk about where they've been and what they've bought. They like and they LOL. What do you do with a news story that provokes outrage, say the construction of a new concentration camp for refugees from the endless war on terror? Do you click the like button? The system is optimised, on the users' side, for face-work, and de-optimised for intellectual or political substance. On the provider's side it is optimised for exposing social relationships and consumer preferences; anything else is noise to be minimised.

In 2014 there was a minor scandal when it was revealed that Facebook had allowed a team of researchers to tamper with its news feed algorithm in order to measure the effects of different kinds of news stories on users' subsequent posts. The scandal missed the big story: Facebook has a news feed algorithm. Friending somebody on Facebook doesn't mean you will see everything they post in your news feed, only those posts that Facebook's algorithm selects for you, along with posts that you never asked to see. Facebook, in its regular day-to-day operation, is one vast, ongoing, uncontrolled experiment in behaviour modification. Did Facebook swing the 2016 US election for Trump? Possibly, but that wasn't their intention. The fracturing of Facebook's user base into insular cantons of groupthink, increasingly divorced from reality, is a predictable side-effect of a system which regulates user interactions based on tribal affiliations and shared consumer tastes, while marginalising information which might threaten users' ontological security.
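
To put some flesh on "Facebook has a news feed algorithm", here is a deliberately crude, entirely hypothetical sketch of engagement-ranked feed selection in TypeScript. Facebook's actual system is proprietary and enormously more elaborate; the point is only the shape of the thing: score posts by predicted engagement, show the top of the list, and silently drop the rest.

    // Hypothetical engagement-ranked feed. This is not Facebook's code; it only
    // illustrates what "the algorithm selects for you" means in practice.

    interface Post {
      author: string;
      text: string;
      likes: number;
      comments: number;
      shares: number;
      sponsored: boolean;   // content you never asked to see
      ageInHours: number;
    }

    // Score by predicted engagement. Whether a post matters to you, or is even
    // true, appears nowhere in the formula.
    function engagementScore(post: Post): number {
      const interactions = post.likes + 2 * post.comments + 3 * post.shares;
      const freshness = 1 / (1 + post.ageInHours);   // newer posts float upward
      const boost = post.sponsored ? 1.5 : 1.0;      // paid content gets a thumb on the scale
      return interactions * freshness * boost;
    }

    // Your "news feed": the top n candidates by score. Everything else your
    // friends posted simply never reaches you.
    function newsFeed(candidates: Post[], n: number): Post[] {
      return [...candidates]
        .sort((a, b) => engagementScore(b) - engagementScore(a))
        .slice(0, n);
    }

Note what a score like this rewards: whatever makes people react. Outrage and tribal affirmation do that extremely well, which is one way you end up with the insular cantons described above.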

Resistance to centralised, unaccountable, proprietary, user-subjugating systems can be fought on two fronts: minimising current harms, and migrating back to an environment where the intelligence of the network is at the ends, under the user's control. You can opt out of pervasive surveillance with browser add-ons like the Electronic Frontier Foundation's Privacy Badger. You can run your own instances of software which provide federated, decentralised services equivalent to the problematic ones, such as:

  • GNU Social is a social networking service similar to Twitter (but with more features). I run my own instance and use it every day to keep in touch with people who also run their own, or have accounts on an instance run by people they trust.
  • Diaspora is another distributed social networking platform more similar to Facebook.
  • OpenID is a standard for distributed authentication, replacing social login services from Facebook, Google, et al.
  • Piwik is a replacement for systems like Google Analytics. You can use it to gather statistics on the use of your own website(s), but it grants nobody the privacy-infringing capability to follow users as they browse around a large number of sites.

The fatal flaw in such software is that few people have the technical ability to set up a web server and install it. That problem is the motivation behind the FreedomBox project. Here's a two and a half minute news story on the launch of the project: Eben Moglen discusses the freedom box on CBS news

I also recommend this half-hour interview, pre-dating the Snowden leaks by a year, which covers much of the above with more conviction and panache than I can manage: Eben Moglen on Facebook, Google and Government Surveillance

Arguably the stakes are currently as high in many countries in the West as they were in the Arab Spring. Snowden has shown that for governments of the Five Eyes intelligence alliance there's no longer a requirement for painstaking spying and infiltration of activist groups in order to identify your key political opponents; it's just a database query. One can without too much difficulty imagine a Western despot taking to Twitter to blurt something like the following:

"Protesters love me. Some, unfortunately, are causing problems. Huge problems. Bad. :("

"Some leaders have used tough measures in the past. To keep our country safe, I'm willing to do much worse."

"We have some beautiful people looking into it. We're looking into a lot of things."

"Our country will be so safe, you won't believe it. ;)"

The Politics of Technology

Published by Matthew Davidson on Fri, 24/02/2017 - 4:03pm

"Technology is anything that doesn't quite work yet." - Danny Hillis, in a frustratingly difficult to source quote. I first heard it from Douglas Adams.

Here is, at minimum, who and what you need to know:

Organisations

Sites

  • Boing Boing — A blog/zine that posts a lot about technology and society, as well as, distressingly, advertorials aimed at Bay Area hipsters.

People

Reading

Viewing

[I'm aware of the hypocrisy in recommending videos of talks about freedom, privacy and security that are hosted on YouTube.]


Tuesday, 1 November 2016 - 1:12pm

Published by Matthew Davidson on Tue, 01/11/2016 - 2:00pm

Coffs Harbour company Janison has today launched a cloud-based enterprise learning solution, developed over several years working with organisations such as Westpac and Rio Tinto.

Really? In 2016 businesses are supposed to believe that a corporate MOOC (Massive Open Online Course; a misnomer from day one) will do for them what MOOCs didn't do for higher education? There are two issues here: quality and dependability.

In 2012, the "year of the MOOC", the ed-tech world was full of breathless excitement over a vision of higher education consisting of a handful of "superprofessors" recording lectures that would be seen by millions of students, with the rest of the functions of the university automated away. There was just one snag, noticed by MOOC pioneer, superprofessor, and founder of Udacity Sebastian Thrun. "We were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don't educate people as others wished, or as I wished. We have a lousy product," he said. That is not to say that there isn't a market for lousy products. As the president of San Jose State University cheerfully admitted of their own MOOC program, "It could not be worse than what we do face to face." It's not hard to imagine a certain class of institution happy to rip off their students by outsourcing their instruction to a tech firm, but harder to see why a business would want to rip itself off with an inferior mode of training. Technology-intensive modes of learning work best among tech-savvy, self-motivated learners, so-called "roaming autodidacts". Ask yourself how many of your employees fit into that category; such learners are a very small minority among the general population.

The other problem is gambling on a product that depends on multiple platforms which reside in the hands of multiple vendors, completely beyond your own control. The longevity of these vendors is not guaranteed, and application development platforms are discontinued on a regular basis. Sticking with large, successful, reputable vendors is no guarantee; Google, for instance, is notorious for regularly euthanising its "Software-as-a-Service" (SaaS) offerings, regardless of the fanfare with which they were launched. You may be willing to trade quality for affordability in the short term, but future migration costs are a matter of "when", not "if".