
Founders’ influence on their new online communities

Published by Anonymous (not verified) on Fri, 03/05/2024 - 1:30am in

Tags 

papers

Hundreds of new subreddits are created every day, but most of them go nowhere and never receive more than a few posts or comments. On the other hand, some become wildly popular. If we want to figure out what helps some communities get attention, then new and small online communities are a great place to start. Indeed, the whole focus of my dissertation was trying to understand who started new communities, and why. So, I was super excited when Sanjay Kairam at Reddit told me that Reddit was interested in studying founders of new subreddits!

The research that Sanjay and I (but mostly Sanjay!) did was accepted at CHI 2024, a leading conference for human-computer interaction research. The goal of the research is to understand 1) founders’ motivations for starting new subreddits, 2) founders’ goals for their communities, 3) founders’ plans for making their community successful, and 4) how all of these relate to what happens to a community in the first month of its existence. To figure this out, we surveyed nearly 1,000 redditors one week after they created a new subreddit.

Lots of Motivations and Goals

So, what did we learn? First, that founders have diverse motivations, but the most common is interest in the topic. As shown in the figure above, most founders reported being motivated by topic engagement, information exchange, and connecting with others, while self-promotion was much rarer.

When we asked about their goals for the community, founders were split, and each of the options we gave was ranked as a top goal by a good chunk of participants. While there is some nuance between the different versions of success, we grouped them into “quantity-oriented” and “quality-oriented”, and looked at how motivations related to goals. Somewhat unsurprisingly, folks interested in self-promotion had quantity-oriented goals, while those interested in exchanging information were more focused on quality.

Diversity in plans

We then asked founders what plans they had for building their community, based on recommendations from the online community literature, such as raising awareness, welcoming newcomers, encouraging contributions, and regulating bad behavior. Surprisingly, for each activity, about half of founders said they planned to do it.

Early Community Outcomes

So, how do these motivations, goals, and plans relate to community outcomes? We looked at the first 28 days of each founded subreddit and counted the number of visitors, contributors, and subscribers. We then ran regression analyses to estimate how well each aspect of motivations, goals, and plans predicted each outcome. High-level results and regression tables are shown below. In each row, a positive β means that the given feature has a positive relationship with the given outcome. The exponentiated rate ratio (RR) column provides a point estimate of the effect size. For example, Self-Promotion has an RR of 1.32, meaning that if a given person’s self-promotion motivation were one unit higher, the model predicts that their community would receive 32% more visitors.
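The RR column is just the exponentiated regression coefficient. As a minimal sketch of that arithmetic (the coefficient here is back-derived from the RR of 1.32 reported in the text, purely for illustration; this is not the paper's model or code):

```python
import math

# Illustrative only: in count-outcome regressions (e.g. negative binomial),
# coefficients are on the log scale, so exp(beta) is a rate ratio (RR).
beta_self_promotion = math.log(1.32)  # back-derived from the reported RR

rr = math.exp(beta_self_promotion)
percent_change = (rr - 1) * 100

# A one-unit increase in the self-promotion motivation multiplies the
# predicted visitor count by RR -- here, a ~32% increase.
print(f"RR = {rr:.2f} -> {percent_change:.0f}% more visitors per unit")
```

The same reading applies to any row of the tables: RR above 1 means more of the outcome per unit of the predictor, RR below 1 means less.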

A number of motivations predicted each of the outcomes we measured. The only consistently positive predictor was topical interest. Those who started a community because of interest in a topic had more visitors, more contributors, and more subscribers than others. Interestingly, those motivated by self-promotion had more visitors, but fewer contributors and subscribers.

Goals had a less pronounced relationship with outcomes. Those with quality-oriented goals had more contributors but fewer visitors than those with quantity-oriented goals. There was no significant difference in subscribers for founders with different types of goals.

Finally, raising awareness was the strategy most associated with our success metrics, predicting all three of them. Surprisingly, encouraging contributions was associated with more contributors, but fewer visitors. While we don’t know the mechanism for sure, asking for contributions seems to provide a barrier that discourages newcomers from taking interest in a community.

So what?

We think that there are some key takeaways for platform designers and those starting new communities. Sanjay outlined many of them on the Reddit engineering blog, but I’ll recap a few.

First, topical knowledge and passion is important. This isn’t a causal study, so we don’t know the mechanisms for sure, but people who are passionate about a topic may be aware of other communities in the space and are able to find the right niche; they are also probably better at writing the kinds of welcome messages, initial posts, etc. that appeal to people interested in the topic.

Second, our work is yet more evidence that communities require different things at different points in their lifecycle. Founders should probably focus on building awareness at first, and worry less about encouraging contributions or regulating behavior.

Finally, we think there are a lot of opportunities for designers to take diverse motivations and goals seriously. This could include matching people by their motivations for using a community, developing dashboards that capture different aspects of success and community health and quality, etc.

Learn More

If you want to learn more about the paper, you have options!

Sources of Underproduction in Open Source Software

Published by Anonymous (not verified) on Tue, 30/01/2024 - 8:00am in

Tags 

papers

Although the world relies on free/libre open source software (FLOSS) for essential digital infrastructure such as the web and cloud, the software that supports that infrastructure is not always as high quality as we might hope, given our level of reliance on it. How can we find this misalignment of quality and importance (or underproduction) before it causes major failures?


In previous work, we found that underproduction is widespread in packages maintained by the Debian community. When we shared this work with the Debian and FLOSS communities, developers suggested that the age and language of the packages might be a factor, and tech managers suggested looking at the teams doing the maintenance work. The software engineering literature offered some support for these suspicions as well, so we embarked on a study to dig deeper into some of the factors associated with underproduction.

Our study was able to partially confirm this perspective using the underproduction analysis dataset from our previous study: software risk due to underproduction increases with the age of both the package and its language, although many older packages, and many written in older languages, remain very well-maintained.

In this plot, dots represent software packages and their age, with higher underproduction factor indicating higher risk. The blue line is a smoothed average: note that we see an increase over time initially, but the trend flattens out for older packages.

This plot shows the spread of the data across the range of underproduction factor, grouped by language, where higher values are indications of higher risk. Languages are sorted from oldest on the left (Lisp) to youngest on the right (Java). Although newer languages overall are associated with lower risk, we see a great deal of variation.

However, we found the resource question more complex: additional contributors were associated with higher risk, not lower risk as we had hypothesized. We also found that underproduction is associated with higher eigenvector centrality in the network formed by taking packages as nodes and drawing edges between packages that share maintainers; that is, underproduced packages were likely to be maintained by the same people who maintain other parts of Debian, rather than being isolated efforts. This suggests that these high-risk packages draw from the same resource pool as those which are performing well. A lack of turnover in maintainership and being maintained by a team were not statistically significant once we included maintainer network structure and age in our model.
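To make the network construction concrete, here is a small, entirely hypothetical sketch of that package graph, with eigenvector centrality computed by power iteration. The package names and edges are invented; this is not the paper's data or code:

```python
# Packages are nodes; an edge joins two packages that share a maintainer.
# All names are hypothetical.
shared_maintainer_edges = [
    ("pkg-a", "pkg-b"), ("pkg-a", "pkg-c"), ("pkg-b", "pkg-c"),  # dense core
    ("pkg-c", "pkg-d"), ("pkg-d", "pkg-e"),                      # periphery
]

nodes = sorted({n for edge in shared_maintainer_edges for n in edge})
index = {n: i for i, n in enumerate(nodes)}
adj = [[0.0] * len(nodes) for _ in nodes]
for u, v in shared_maintainer_edges:
    adj[index[u]][index[v]] = adj[index[v]][index[u]] = 1.0

# Eigenvector centrality by power iteration: a node is central when its
# neighbours are themselves central.
x = [1.0] * len(nodes)
for _ in range(200):
    x = [sum(adj[i][j] * x[j] for j in range(len(nodes)))
         for i in range(len(nodes))]
    norm = max(x)
    x = [v / norm for v in x]

centrality = dict(zip(nodes, x))
# pkg-c sits in the densely connected core, so it scores highest --
# analogous to underproduced packages tended by busy, central maintainers.
```

The finding above is that underproduced packages tend to look like pkg-c here: embedded in the well-connected core of the maintainer network, not out on the periphery.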

How should software communities respond? Underproduction appears in part to be associated with age, meaning that all communities sooner or later may need to confront it, and new projects should be thoughtful about using older languages. Distributions and upstream project developers are all part of the supply chain and have a role to play in the work of preventing and countering underproduction. Our findings about resources and organizational structure suggest that “more eyeballs” alone are not the answer: supporting key resources may be of particular value as a means to counter underproduction.

This paper will be presented as part of the International Conference on Software Analysis, Evolution and Reengineering (SANER) 2024 in Rovaniemi, Finland. Preprint available HERE; code and data released HERE.

This work would not have been possible without the generosity of the Debian community. We are indebted to these volunteers who, in addition to producing Free/Libre Open Source Software, have also made their records available to the public. We also gratefully acknowledge support from the Sloan Foundation through the Ford/Sloan Digital Infrastructure Initiative, Sloan Award 2018-11356, as well as the National Science Foundation (Grant IIS-2045055). This work was conducted using the Hyak supercomputer at the University of Washington as well as research computing resources at Northwestern University.

FLOSS project risk and community formality

Published by Anonymous (not verified) on Thu, 25/01/2024 - 1:34am in

Tags 

papers

What structure and rules are best for communities producing high-quality free/libre and open source software (FLOSS)? The stakes are high: cybersecurity researchers are raising the alarm about cybersecurity risk due to undermaintained components in the global software supply chain—much of which is FLOSS. In work that’s just been accepted to the IEEE International Conference on Software Analysis, Evolution and Reengineering (‘SANER’), we studied 182 Python-language packages in the GNU/Linux Debian distribution, examining the relationship between their levels of engineering formality and software risk. We found that more formal developer organization is associated with higher levels of software risk, and more widely spread developer responsibility is associated with lower levels of software risk.

We studied software risk through the underproduction metric initially developed by Champion and Hill (2021). Underproduction is a measurement of misalignment between the usage demands of a software project and the contributions of the project’s developer community. As such, underproduction measures the risk that software will be undermaintained, possibly including a security bug.
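As a rough intuition for the metric (this is a deliberately simplified sketch, not the actual measure from Champion and Hill's paper), underproduction can be thought of as a gap between a package's importance rank and its quality rank. All package names and scores below are invented:

```python
# Hypothetical data: (importance score, e.g. installs; quality score,
# e.g. speed of bug resolution). Higher is better for both.
packages = {
    "web-server": (9800, 2.1),
    "crypto-lib": (9500, 8.7),
    "pdf-tool":   (1200, 7.9),
    "old-parser": (8700, 1.4),
}

by_importance = sorted(packages, key=lambda p: packages[p][0], reverse=True)
by_quality = sorted(packages, key=lambda p: packages[p][1], reverse=True)

def underproduction_gap(pkg):
    """Positive when a package is more important than its quality warrants."""
    return by_quality.index(pkg) - by_importance.index(pkg)

gaps = {p: underproduction_gap(p) for p in packages}
# "web-server" and "old-parser" are heavily used but low quality, so they
# show positive gaps -- the candidates for underproduction risk.
```

The actual measure is more careful about aligning the two distributions, but the core idea is the same: flag software whose demand outstrips the maintenance effort it receives.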

Our work examines the relationship between risk due to underproduction and governance formality. We employed measures initially developed by Tamburri et al. (2013) and later re-implemented in Tamburri et al. (2019). These metrics use multiple measures of software project formality — such as the average contributor type, usage of GitHub milestones, and age — to evaluate how formally structured a given project is.

Plot of the relationship between mean underproduction factor and mean membership type (MMT), a metric encapsulating the diffusion of merge responsibility across a project’s developer community.



Using linear regression, we found that more formal project structures are associated with higher levels of underproduction and thus with increased project risk. We also found that the share of community members who have merged code into the main development branch is related to underproduction, with lower levels of underproduction correlated with larger shares of members merging code.

Evaluated together, these two conclusions suggest that operating less formally and sharing power more equally are associated with lower underproduction risk. The development of a FLOSS project's engineering practices is a process laden with tradeoffs; we hope that our conclusions can help better inform community decision making and organization.

For more details, visualizations, statistics, and more, we hope you’ll take a look at our paper. If you are attending SANER in March 2024, we hope you’ll talk to us in Rovaniemi, Finland!

—————

The full citation for the paper is:

Gaughan, Matthew, Champion, Kaylea, and Hwang, Sohyeon. (2024) “Engineering Formality and Software Risk in Debian Python Packages.” In 31st IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2024) (Short Paper and Posters Track). Rovaniemi, Finland.

We have also released replication materials for the paper, including all the data and code used to conduct the analyses.

This blog post and the paper it describes are collaborative work by Matt Gaughan, Kaylea Champion, and Sohyeon Hwang.

How To Write A Philosophy Paper: Online Guides

Published by Anonymous (not verified) on Wed, 24/01/2024 - 9:40am in

Some philosophy professors, realizing that many of their students are unfamiliar with writing philosophy papers, provide them with “how-to” guides to the task.

[Originally posted on January 15, 2019. Reposted by reader request.]

I thought it might be useful to collect examples of these. If you know of any already online, please mention them in the comments and include links.

If you have a PDF of one that isn’t online that you’d like to share, you can email it to me and I can put it online and add it to the list below.

Guidelines for Students on Writing Philosophy Papers

(Crossed-out text indicates outdated link.)

The post How To Write A Philosophy Paper: Online Guides first appeared on Daily Nous.

A new paper on the risk of nationalist governance capture in self-governed Wikipedia projects

Published by Anonymous (not verified) on Sat, 13/01/2024 - 4:00am in

Tags 

papers, Wikipedia

Wikipedia is one of the most visited websites in the world and the largest online repository of human knowledge. It is also both a target of and a defense against misinformation, disinformation, and other forms of online information manipulation. Importantly, its 300 language editions are self-governed—i.e., they set most of their rules and policies. Our new paper asks: What types of governance arrangements make some self-governed online groups more vulnerable to disinformation campaigns? We answer this question by comparing two Wikipedia language editions—Croatian and Serbian Wikipedia. Despite relying on common software and being situated in a common sociolinguistic environment, these communities differed in how successfully they responded to disinformation-related threats.

For nearly a decade, the Croatian language version of Wikipedia was run by a cabal of far-right nationalists who edited articles in ways that promoted fringe political ideas and involved cases of historical revisionism related to the Ustaše regime, a fascist movement that ruled the Nazi puppet state called the Independent State of Croatia during World War II. This cabal seized complete control of the governance of the encyclopedia, banned and blocked those who disagreed with them, and operated a network of fake accounts to give the appearance of grassroots support for their policies.

Thankfully, Croatian Wikipedia appears to be an outlier. Though both the Croatian and Serbian language editions have been documented to contain nationalist bias and historical revisionism, Croatian Wikipedia alone seems to have succumbed to governance capture: a takeover of the project’s mechanisms and institutions of governance by a small group of users.

The situation in Croatian Wikipedia was well-documented and is now largely fixed, but we still know very little about why Croatian Wikipedia was taken over while other language editions seem to have rebuffed similar capture attempts. In a new paper accepted for publication in the Proceedings of the ACM on Human-Computer Interaction (CSCW), we present an interview-based study that tries to explain why Croatian Wikipedia was captured while several other editions facing similar contexts and threats fared better.

Short video presentation of the work given at Wikimania in August 2023.

We interviewed 15 participants from both the Croatian and Serbian Wikipedia projects, as well as the broader Wikimedia movement. Based on insights from these interviews, we arrived at three propositions that, together, help explain why Croatian Wikipedia succumbed to capture while Serbian Wikipedia did not: 

  1. Perceived Value as a Target. Is the project worth expending the effort to capture?
  2. Bureaucratic Openness. How easy is it for contributors outside the core founding team to ascend to local governance positions?
  3. Institutional Formalization. To what degree does the project prefer personalistic, informal forms of organization over formal ones?

The conceptual model from our paper, visualizing possible institutional configurations among Wikipedia projects that affect the risk of governance capture. 

We found that both Croatian Wikipedia and Serbian Wikipedia were attractive targets for far-right nationalist capture due to their sizable readership and resonance with a national identity. However, we also found that the two projects diverged early on in their trajectories in terms of how open they remained to new contributors ascending to local governance positions and the degree to which they privileged informal relationships over formal rules and processes as organizing principles of the project. Ultimately, Croatian Wikipedia’s relative lack of bureaucratic openness and of rules constraining administrator behavior created a window of opportunity for a motivated contingent of editors to seize control of the governance mechanisms of the project.

Though our empirical setting was Wikipedia, our theoretical model may offer insight into the challenges faced by self-governed online communities more broadly. As interest in decentralized alternatives to Facebook and X (formerly Twitter) grows, communities on these sites will likely face similar threats from motivated actors. Understanding the vulnerabilities inherent in these self-governing systems is crucial to building resilient defenses against threats like disinformation. 

For more details on our findings, take a look at the preprint of our paper.

Preprint on arxiv.org: https://arxiv.org/abs/2311.03616. The paper has been accepted for publication in Proceedings of the ACM on Human-Computer Interaction (CSCW) and will be presented at CSCW in 2024. This blog post and the paper it describes are collaborative work by Zarine Kharazian, Benjamin Mako Hill, and Kate Starbird.

Still 'Profiteering From Anxiety'

Published by Anonymous (not verified) on Thu, 07/02/2013 - 8:23am in



Late last year, the excellent Neurobonkers blog covered a case of 'Profiteering from anxiety'.

It seems one Nader Amir has applied for a patent on the psychological technique of 'Attentional Retraining', a method designed to treat anxiety and other emotional problems by conditioning the mind to unconsciously pay more attention to positive things and ignore unpleasant stuff.

For just $139.99, you can have a crack at modifying your unconscious with the help of Amir's Cognitive Retraining Technologies.

It's a clever idea... but hardly a new one. As Neurobonkers said, research on these kinds of methods had been going on for years before Amir came on the scene. In a comment, Prof. Colin MacLeod (who's been researching this stuff for over 20 years) argued that "I do not believe that a US patent granted to Prof Amir for the attentional bias modification approach would withstand challenge."

Well, in an interesting turn of events, Amir has just issued Corrections (1, 2) to two of his papers. Both articles reported that retraining was an effective treatment for anxiety; but in both cases he now reveals that there was

an error...in the article a disclosure should have been noted that Nader Amir is the co-founder of a company that markets anxiety relief products.

Omitting to declare a conflict of interest... how unfortunate.

Still, it's an easy mistake to make: when you're focused on doing unbiased, objective, original research, as Amir doubtless was, such mundane matters are the last thing you tend to pay attention to.

Amir, N., and Taylor, C. (2013). Correction to Amir and Taylor (2012). Journal of Consulting and Clinical Psychology, 81 (1), 74-74. DOI: 10.1037/a0031156

Amir, N., Taylor, C., and Donohue, M. (2013). Correction to Amir et al. (2011). Journal of Consulting and Clinical Psychology, 81 (1), 112-112 DOI: 10.1037/a0031157

Another Scuffle In The Coma Ward

Published by Anonymous (not verified) on Tue, 29/01/2013 - 6:22am in

It's not been a good few weeks for Adrian Owen and his team of Canadian neurologists.

Over the past few years, Owen's made numerous waves, thanks to his claim that some patients thought to be in a vegetative state may, in fact, be at least somewhat conscious, and able to respond to commands. Remarkable if true, but not everyone's convinced.

A few weeks ago, Owen et al were criticized over their appearance in a British TV program about their use of fMRI to measure brain activity in coma patients. Now, they're under fire from a second group of critics over a different project.

The new bone of contention is a paper published in 2011 called Bedside detection of awareness in the vegetative state. In this report, Owen and colleagues presented EEG results that, they said, show that some vegetative patients are able to understand speech.

In this study, healthy controls and patients were asked to imagine performing two different actions: moving their hand, or their toe. Owen et al found that it was possible to distinguish between the 'hand' and 'toe'-related patterns of brain electrical activity. This was true of most healthy control subjects, as expected, but also of some - not all - patients in a 'vegetative' state.

The skeptics aren't convinced, however. They reanalyzed the raw EEG data and claim that it just doesn't prove anything.


This image shows that in a healthy control, EEG activity was "clean" and generally normal. In the coma patient, however, the data are a mess: dominated by large slow delta waves - in healthy people, you only see those during deep sleep - and there are also a lot of muscle artefacts, visible as a 'thickening' of the lines.

These don't come from the brain at all; they're just muscle twitches. Crucially, the location and power of these twitches varied over time (as muscle spikes often do).

This wouldn't necessarily be a problem, the critics say, except that the statistics used by Owen et al didn't control for slow variations over time, i.e., correlations between consecutive trials (non-independence). If you do take account of these, there's no statistically significant evidence that the EEG associated with 'hand' vs 'toe' can be distinguished in any patient.
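The independence problem can be illustrated with a toy series (all values invented): when a slowly drifting artefact makes each trial resemble the one before it, the trials carry far less independent information than their raw count suggests. A quick diagnostic is the lag-1 autocorrelation of the trial-by-trial feature:

```python
# Hypothetical trial-by-trial feature values with a slow upward drift,
# standing in for an EEG measure contaminated by a changing artefact.
trials = [1.0, 1.2, 1.4, 1.5, 1.7, 1.9, 2.0, 2.2, 2.4, 2.5]

def lag1_autocorr(xs):
    """Correlation of each value with its successor."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

rho = lag1_autocorr(trials)
# rho near 1 means each trial strongly predicts the next; standard tests
# that assume independent trials will then overstate significance.
```

A common remedy is to permute or resample at the level of blocks of consecutive trials rather than individual trials, so the drift structure is preserved under the null.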

However, in their reply, Owen's team say that:

their reanalysis only pushes two of our three positive patients to just beyond the widely accepted p=0.05 threshold for significance - to p=0.06 and p=0.09, respectively. To dismiss the third patient, whose data remain significant, they state that the statistical threshold for accepting command-following should be adjusted for multiple comparisons... but we know of no groups in this field who routinely use such a conservative correction with patient data, including the critics themselves.

I have to say that, statistical arguments aside, the EEGs from the patients just don't look very reliable, largely because of those pesky muscle spikes. A new method for removing these annoyances has just been proposed... I wonder if that could help settle this?

Goldfine, A., Bardin, J., Noirhomme, Q., Fins, J., Schiff, N., and Victor, J. (2013). Reanalysis of "Bedside detection of awareness in the vegetative state: a cohort study". The Lancet, 381 (9863), 289-291. DOI: 10.1016/S0140-6736(13)60125-7

Is This How Memory Works?

Published by Anonymous (not verified) on Sun, 27/01/2013 - 8:46pm in

Tags 

papers, Science

We know quite a bit about how long-term memory is formed in the brain - it's all about the strengthening of synaptic connections between neurons. But what about remembering something over the course of just a few seconds? Like how you (hopefully) still recall what that last sentence was about?

Short-term memory is formed and lost far too quickly for it to be explained by any (known) kind of synaptic plasticity. So how does it work? British mathematicians Samuel Johnson and colleagues say they have the answer: Robust Short-Term Memory without Synaptic Learning.

They write:

The mechanism, which we call Cluster Reverberation (CR), is very simple. If neurons in a group are more densely connected to each other than to the rest of the network, either because they form a module or because the network is significantly clustered, they will tend to retain the activity of the group: when they are all initially firing, they each continue to receive many action potentials and so go on firing.

The idea is that a neural network will naturally exhibit short-term memory - i.e. a pattern of electrical activity will tend to be maintained over time - so long as neurons are wired up in the form of clusters of cells mostly connected to their neighbours:



The cells within a cluster (or module) are all connected to each other, so once a module becomes active, it will stay active as the cells stimulate each other.

Why, you might ask, are the clusters necessary? Couldn't each individual cell have a memory - a tendency for its activity level to be 'sticky' over time, so that it kept firing even after it had stopped receiving input?

The authors say that even 'sticky' cells couldn't store memory effectively, because we know that the firing pattern of any individual cell is subject to a lot of random variation. If all of the cells were interconnected, this noise would quickly erase the signal. Clustering overcomes this problem.
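The retention mechanism can be sketched in a deliberately simplified toy model (my assumptions, much cruder than the paper's: binary neurons, deterministic majority-rule updates, and two hand-built, fully connected clusters joined by a single bridge edge):

```python
# Two fully connected clusters of five binary neurons, joined by one
# "bridge" connection between neuron 0 and neuron 5.
CLUSTER_SIZE = 5
neighbours = {i: [] for i in range(2 * CLUSTER_SIZE)}
for base in (0, CLUSTER_SIZE):
    for i in range(base, base + CLUSTER_SIZE):
        for j in range(base, base + CLUSTER_SIZE):
            if i != j:
                neighbours[i].append(j)
neighbours[0].append(CLUSTER_SIZE)
neighbours[CLUSTER_SIZE].append(0)

def step(state):
    # A cell fires iff at least half of its inputs fired on the previous step.
    return [
        1 if 2 * sum(state[j] for j in neighbours[i]) >= len(neighbours[i])
        else 0
        for i in range(len(state))
    ]

# Stimulate only the first cluster, then let the network run freely.
state = [1] * CLUSTER_SIZE + [0] * CLUSTER_SIZE
for _ in range(20):
    state = step(state)

# The stimulated cluster keeps re-exciting itself (the "reverberation"),
# while the single bridge is too weak to ignite the second cluster.
```

Even in this crude form, the point survives: dense within-cluster wiring lets a pattern of activity persist without any synaptic change, which is the heart of the Cluster Reverberation proposal.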

But how could a neural clustering system develop in the first place? And how would the brain ensure that the clusters were 'useful' groups, rather than just being a bunch of different neurons doing entirely different things? Here's the clever bit:

If an initially homogeneous (i.e., neither modular nor clustered) area of brain tissue were repeatedly stimulated with different patterns... then synaptic plasticity mechanisms might be expected to alter the network structure in such a way that synapses within each of the imposed modules would all tend to become strengthened.

In other words, even if the brain started out life with a random pattern of connections, everyday experience (e.g. sensory input) could create a modular structure of just the right kind to allow short-term memory. Incidentally, such a 'modular' network would also be one of those famous small-world networks.
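That bootstrapping step can also be sketched in a few lines. Assuming a plain Hebbian rule - co-active neurons strengthen their connection, a stand-in for whatever plasticity mechanism the brain actually uses, with learning rate and group sizes invented for the demo - repeatedly imposing group-wise activity patterns on an initially unstructured weight matrix carves out exactly the modular structure the model needs:

```python
import numpy as np

rng = np.random.default_rng(1)

n, groups = 60, 3
labels = np.repeat(np.arange(groups), n // groups)
w = rng.random((n, n)) * 0.1            # weak, unstructured initial weights
np.fill_diagonal(w, 0)

# Repeatedly impose activity patterns that light up one group at a time,
# strengthening synapses between co-active neurons (plain Hebbian learning).
for _ in range(200):
    active = (labels == rng.integers(groups)).astype(float)
    w += 0.01 * np.outer(active, active)
    np.fill_diagonal(w, 0)              # still no self-connections
    w = np.clip(w, 0, 1)                # cap synaptic strength

same = labels[:, None] == labels[None, :]
np.fill_diagonal(same, False)
between = ~same & ~np.eye(n, dtype=bool)
print("mean within-group weight :", w[same].mean())     # strengthened
print("mean between-group weight:", w[between].mean())  # left near its start
```

Starting from random connectivity, within-group synapses end up far stronger than between-group ones - the modular wiring that Cluster Reverberation requires.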

It strikes me as a very elegant model. But it is just a model, and neuroscience has a lot of those; as always, it awaits experimental proof.

One possible implication of this idea, it seems to me, is that short-term memory ought to be pretty conservative, in the sense that it could only store reactivations of existing neural circuits, rather than entirely new patterns of activity. Might it be possible to test that...?

Johnson S, Marro J, & Torres JJ (2013). Robust Short-Term Memory without Synaptic Learning. PLoS ONE, 8 (1). PMID: 23349664

Is Medical Science Really 86% True?

Published by Anonymous (not verified) on Fri, 25/01/2013 - 5:39am in

The idea that Most Published Research Findings Are False rocked the world of science when it was proposed in 2005. Since then, however, it's become widely accepted - at least with respect to many kinds of studies in biology, genetics, medicine and psychology.

Now, however, a new analysis from Jager and Leek says things are nowhere near as bad as that: only 14% of the medical literature is wrong, not half of it. Phew!

But is this conclusion... falsely positive?

I'm skeptical of this result for two separate reasons. First off, I have problems with the sample of the literature they used: it seems likely to contain only the 'best' results. This is because the authors:

  • only considered the crème-de-la-crème of top-ranked medical journals, which may be more reliable than others.
  • only looked at the Abstracts of the papers, which generally contain the best results in the paper.
  • only included the just over 5000 statistically significant p-values present in the 75,000 Abstracts published. Those papers that put their p-values up front might be more reliable than those that bury them deep in the Results.

In other words, even if it's true that only 14% of the results in these Abstracts were false, the proportion in the medical literature as a whole might be much higher.

Secondly, I have doubts about the statistics. Jager and Leek estimated the proportion of false-positive p-values by assuming that true p-values tend to be low: not just below the arbitrary 0.05 cutoff, but well below it.

It turns out that p-values in these Abstracts strongly cluster around 0, and the conclusion is that most of them are real.
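To see the logic of the estimate, here's a toy version of the idea - my own simplified reconstruction, not Jager and Leek's actual mixture model, with effect size and sample counts invented. If false-positive p-values are uniform on [0, 0.05] while true positives pile up near 0, then the top slice of the significant p-values is almost entirely false positives, and scaling that slice up estimates the overall false-positive share:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(2)

def z_test_p(true_mean, n=30):
    """Two-sided p-value for a z-test of 'mean = 0' on simulated data."""
    z = abs(rng.normal(true_mean, 1, n).mean()) * sqrt(n)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# A toy literature of 10,000 results: 70% real effects, 30% true nulls.
ps = np.array([z_test_p(0.8) for _ in range(7000)] +
              [z_test_p(0.0) for _ in range(3000)])
sig = ps[ps < 0.05]

# If false positives are uniform on [0, 0.05], each 0.01-wide bin holds the
# same number of them, while true positives crowd into the lowest bins -- so
# five times the count in the top bin estimates the total false positives.
est_share = 5 * (sig >= 0.04).sum() / len(sig)
true_share = 0.05 * 3000 / len(sig)   # expected false positives among sig
print(f"estimated false-positive share: {est_share:.3f}")
print(f"actual false-positive share:    {true_share:.3f}")
```

With honest, uniform false positives the estimate lands close to the truth - which is precisely why the whole approach stands or falls on that uniformity assumption.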


But this depends on the crucial assumption that false-positive p-values, unlike real ones, are equally likely to fall anywhere between 0 and 0.05. As the authors put it:

"if we consider only the P-­values that are less than 0.05, the P-­values for false positives must be distributed uniformly between 0 and 0.05."

The statement is true in theory - by definition, p values should behave in that way assuming the null hypothesis is true. In theory.
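The theoretical behaviour is easy to verify in simulation (a sketch with invented sample sizes): p-values from honestly conducted tests of a true null really do come out uniform, so the ones landing below 0.05 are spread evenly across [0, 0.05].

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)

def z_test_p(sample):
    """Two-sided p-value for a z-test of 'mean = 0' (known variance 1)."""
    z = abs(sample.mean()) * sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

# 20,000 honest experiments in which the null hypothesis is true.
ps = np.array([z_test_p(rng.normal(0, 1, 30)) for _ in range(20000)])
sig = ps[ps < 0.05]
print("fraction reaching p < 0.05 :", len(sig) / len(ps))   # ~0.05
print("of those, fraction < 0.025 :", (sig < 0.025).mean()) # ~0.5: uniform
```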

But... we have no way of knowing if it's true in practice. It might well not be.

For example, authors tend to put their best p-values in the Abstract. If they have several significant findings below 0.05, they'll likely put the lowest one up front. This works for both true and false positives: if you get p=0.01 and p=0.05, you'll probably highlight the 0.01. Therefore, false positive p values in Abstracts might cluster low, just like true positives.

Alternatively, false p's could also cluster the other way, just below 0.05. This is because running lots of independent comparisons is not the only way to generate false positives. You can also take almost-significant p's and fudge them downwards, for example by excluding 'outliers', or running slightly different statistical tests. You won't get p=0.06 down to p=0.001 by doing that, but you can get it down to p=0.04.
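A quick simulation shows how this would distort things. Starting from honest null p-values and applying an invented "fudging" rule - results just above the cutoff get re-analysed until they slip under it - the false positives end up piled just below 0.05 rather than spread uniformly, violating the key assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

# Honest null p-values are uniform on [0, 1].  Now model "fudging" with an
# invented rule: any result just above the cutoff (0.05 to 0.10) gets
# re-analysed -- outliers dropped, tests swapped -- until it lands just under.
ps = rng.random(20000)
hackable = (ps > 0.05) & (ps < 0.10)
ps[hackable] = rng.uniform(0.03, 0.05, hackable.sum())

sig = ps[ps < 0.05]
# Under the uniformity assumption, half of the significant false positives
# should fall below 0.025 -- after hacking, far fewer do.
print("fraction of significant p's below 0.025:", (sig < 0.025).mean())
```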

In this dataset, there's no evidence that p's just below 0.05 were more common. However, in many other sets of scientific papers, clear evidence of such "p hacking" has been found. That reinforces my suspicion that this is an especially 'good' sample.

Anyway, those are just two examples of why false p's might be unevenly distributed; there are plenty of others: 'there are more bad scientific practices in heaven and earth, Horatio, than are dreamt of in your model...'

In summary, while I think the idea of modelling the distributions of true and false findings, and using those models to estimate the proportion of each in a sample, is a promising one, a lot more work is needed before we can be confident in the results of the approach.

A Scuffle In The Coma Ward

Published by Anonymous (not verified) on Fri, 18/01/2013 - 5:12am in


A couple of months ago, the BBC TV show Panorama covered the work of a team of neurologists (led by Prof. Adrian Owen) who are pioneering the use of fMRI scanning to measure brain activity in coma patients.

The startling claim is that some people who have been considered entirely unconscious for years are actually able to understand speech and respond to requests - not by body movements, but purely on the level of brain activation.

However, not everyone was impressed. A group of doctors swiftly wrote a critical response, published in the British Medical Journal as fMRI for vegetative and minimally conscious states: A more balanced perspective.

The Panorama programme... failed to distinguish clearly between vegetative vs. minimally conscious states, and gave the impression that 20% of patients in a vegetative state show cognitive responses on fMRI.

There are important differences between the two states. Patients in a vegetative state have no discernible awareness of self and no cognitive interaction with their environment. Patients in a minimally conscious state show evidence of interaction through behaviours...

The programme presented two patients said to be in a “vegetative state” who showed evidence of cognitive interaction on assessment using fMRI but the clinical methods used for the original diagnosis were not stated. In both cases, family members clearly reported that the patient made positive but inconsistent behavioural responses to questions... one of these patients was filmed responding to a question from his mother by raising his thumb and the other seemed to turn his head purposefully.

So Panorama stands accused of passing off patients who were really minimally conscious as being in a vegetative state. To see signs of understanding on brain scans from the latter would be truly amazing, because it would be the first evidence that they weren't, well, vegetative.

However if they were 'merely' minimally conscious patients, it's not as interesting, because we already knew they were capable of making responses.

Now the Panorama team - and Professor Owen - have replied in a BMJ piece of their own. Given that they're charged with misleading journalism and sloppy medicine, they're understandably a bit snarky:

Just by viewing this one hour documentary the authors felt able to discern that both the patients “said to be in a vegetative state” are “probably” minimally conscious... One of these patients, Scott, has had the same neurologist for more than a decade. Professor Young, who appeared in the film, made it clear that Scott had appeared vegetative in every assessment...

The fact that these authors took Scott’s fleeting movement, shown in the programme, to indicate a purposeful (“minimally conscious”) response shows why it is so important that the diagnosis is made in person, by an experienced neurologist, using internationally agreed criteria.

In other words, they were vegetative, and the critics who said otherwise, on the basis of some TV footage, were being silly.

In other words...it's on.

Turner-Stokes L, Kitzinger J, Gill-Thwaites H, Playford ED, Wade D, Allanson J, Pickard J, & Royal College of Physicians' Prolonged Disorders of Consciousness Guidelines Development Group (2012). fMRI for vegetative and minimally conscious states. BMJ (Clinical research ed.), 345. PMID: 23190911

Walsh F, Simmonds F, Young GB, & Owen AM (2013). Panorama responds to editorial on fMRI for vegetative and minimally conscious states. BMJ (Clinical research ed.), 346. PMID: 23298817
