Machines are getting schooled on fairness

You’ve probably encountered at least one machine-learning algorithm today. These clever computer codes sort search engine results, weed spam e-mails from inboxes and optimize navigation routes in real time. People entrust these programs with increasingly complex — and sometimes life-changing — decisions, such as diagnosing diseases and predicting criminal activity.

Machine-learning algorithms can make these sophisticated calls because they don’t simply follow a series of programmed instructions the way traditional algorithms do. Instead, these souped-up programs study past examples of how to complete a task, discern patterns from the examples and use that information to make decisions on a case-by-case basis.
Unfortunately, letting machines with this artificial intelligence, or AI, figure things out for themselves doesn’t just make them good critical “thinkers”; it also gives them a chance to pick up biases.

Investigations in recent years have uncovered several ways algorithms exhibit discrimination. In 2015, researchers reported that Google’s ad service preferentially displayed postings related to high-paying jobs to men. A 2016 ProPublica investigation found that COMPAS, a tool used by many courtrooms to predict whether a criminal will break the law again, wrongly predicted that black defendants would reoffend nearly twice as often as it made that wrong prediction for whites. The Human Rights Data Analysis Group also showed that the crime prediction tool PredPol could lead police to unfairly target low-income, minority neighborhoods (SN Online: 3/8/17). Clearly, algorithms’ seemingly humanlike intelligence can come with humanlike prejudices.

“This is a very common issue with machine learning,” says computer scientist Moritz Hardt of the University of California, Berkeley. Even if a programmer designs an algorithm without prejudicial intent, “you’re very likely to end up in a situation that will have fairness issues,” Hardt says. “This is more the default than the exception.”
Developers may not even realize a program has taught itself certain prejudices. This problem gets down to what is known as a black box issue: How exactly is an algorithm reaching its conclusions? Since no one tells a machine-learning algorithm exactly how to do its job, it’s often unclear — even to the algorithm’s creator — how or why it ends up using data the way it does to make decisions.
Several socially conscious computer and data scientists have recently started wrestling with the problem of machine bias. Some have come up with ways to add fairness requirements into machine-learning systems. Others have found ways to illuminate the sources of algorithms’ biased behavior. But the very nature of machine-learning algorithms as self-taught systems means there’s no easy fix to make them play fair.

Learning by example
In most cases, machine learning is a game of algorithm see, algorithm do. The programmer assigns an algorithm a goal — say, predicting whether people will default on loans. But the machine gets no explicit instructions on how to achieve that goal. Instead, the programmer gives the algorithm a dataset to learn from, such as a cache of past loan applications labeled with whether the applicant defaulted.

The algorithm then tests various ways to combine loan application attributes to predict who will default. The program works through all of the applications in the dataset, fine-tuning its decision-making procedure along the way. Once fully trained, the algorithm should ideally be able to take any new loan application and accurately determine whether that person will default.
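For readers who want to see what that training loop looks like in practice, here is a minimal sketch in Python using scikit-learn. The file name and the application attributes are hypothetical stand-ins, not data from any real lender.

```python
# Minimal sketch of "algorithm see, algorithm do": learn to predict loan
# defaults from past, labeled applications. File and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

applications = pd.read_csv("past_loan_applications.csv")  # hypothetical dataset
X = applications[["income", "loan_amount", "credit_history_years"]]
y = applications["defaulted"]  # 1 if the applicant defaulted, else 0

# Hold out some applications to check how well the learned rule generalizes.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)            # the algorithm "studies past examples"
print("held-out accuracy:", model.score(X_test, y_test))

# Once trained, it can judge a brand-new application case by case.
new_applicant = pd.DataFrame(
    [{"income": 52000, "loan_amount": 15000, "credit_history_years": 7}])
print("predicted default?", bool(model.predict(new_applicant)[0]))
```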

The trouble arises when training data are riddled with biases that an algorithm may incorporate into its decisions. For instance, if a human resources department’s hiring algorithm is trained on historical employment data from a time when men were favored over women, it may recommend hiring men more often than women. Or, if there were fewer female applicants in the past, then the algorithm has fewer examples of those applications to learn from, and it may not be as accurate at judging women’s applications.
At first glance, the answer seems obvious: Remove any sensitive features, such as race or sex, from the training data. The problem is, many ostensibly nonsensitive aspects of a dataset can serve as proxies for sensitive ones. Zip code may be strongly related to race, college major to sex, health to socioeconomic status.

And it may be impossible to tell how different pieces of data — sensitive or otherwise — factor into an algorithm’s verdicts. Many machine-learning algorithms develop deliberative processes that involve so many thousands of complex steps that they’re impossible for people to review.

Creators of machine-learning systems “used to be able to look at the source code of our programs and understand how they work, but that era is long gone,” says Simon DeDeo, a cognitive scientist at Carnegie Mellon University in Pittsburgh. In many cases, neither an algorithm’s authors nor its users care how it works, as long as it works, he adds. “It’s like, ‘I don’t care how you made the food; it tastes good.’ ”

But in other cases, the inner workings of an algorithm could make the difference between someone getting parole, an executive position, a mortgage or even a scholarship. So computer and data scientists are coming up with creative ways to work around the black box status of machine-learning algorithms.

Setting algorithms straight
Some researchers have suggested that training data could be edited before being given to machine-learning programs so that the data are less likely to imbue algorithms with bias. In 2015, one group proposed testing data for potential bias by building a computer program that uses people’s nonsensitive features to predict their sensitive ones, like race or sex. If the program could do this with reasonable accuracy, the dataset’s sensitive and nonsensitive attributes were tightly connected, the researchers concluded. That tight connection was liable to train discriminatory machine-learning algorithms.

To fix bias-prone datasets, the scientists proposed altering the values of whatever nonsensitive elements their computer program had used to predict sensitive features. For instance, if their program had relied heavily on zip code to predict race, the researchers could assign fake values to more and more digits of people’s zip codes until they were no longer a useful predictor for race. The data could be used to train an algorithm clear of that bias — though there might be a tradeoff with accuracy.
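That test-and-repair idea can be sketched in a few lines of code. This is a rough illustration, with made-up column names and an arbitrary cutoff, not the 2015 group’s actual procedure:

```python
# Sketch: detect a proxy for a sensitive feature, then blur it away.
# Column names, the cutoff and the coarsening rule are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

data = pd.read_csv("training_data.csv")  # hypothetical dataset
nonsensitive = data[["zip_code", "income", "college_major_code"]]
sensitive = data["race"]

# Step 1: if nonsensitive columns predict race well, they are acting as proxies.
proxy_score = cross_val_score(
    RandomForestClassifier(n_estimators=100, random_state=0),
    nonsensitive, sensitive, cv=5).mean()
print("how well nonsensitive features predict race:", proxy_score)

# Step 2: coarsen zip codes digit by digit (e.g. 94720 -> 94720 -> 94700 -> 94000)
# until they stop being a useful predictor of race.
digits_blanked = 0
while proxy_score > 0.6 and digits_blanked < 5:   # 0.6 is an arbitrary cutoff
    digits_blanked += 1
    factor = 10 ** digits_blanked
    blurred = nonsensitive.copy()
    blurred["zip_code"] = (blurred["zip_code"] // factor) * factor
    proxy_score = cross_val_score(
        RandomForestClassifier(n_estimators=100, random_state=0),
        blurred, sensitive, cv=5).mean()

print("zip digits blanked:", digits_blanked, "remaining proxy score:", proxy_score)
```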

On the flip side, other research groups have proposed de-biasing the outputs of already-trained machine-learning algorithms. In 2016 at the Conference on Neural Information Processing Systems in Barcelona, Hardt and colleagues recommended comparing a machine-learning algorithm’s past predictions with real-world outcomes to see if the algorithm was making mistakes equally for different demographics. This was meant to prevent situations like the one created by COMPAS, which made wrong predictions about black and white defendants at different rates. Among defendants who didn’t go on to commit more crimes, blacks were flagged by COMPAS as future criminals more often than whites. Among those who did break the law again, whites were more often mislabeled as low-risk for future criminal activity.

For a machine-learning algorithm that exhibits this kind of discrimination, Hardt’s team suggested switching some of the program’s past decisions until each demographic gets erroneous outputs at the same rate. Then, that amount of output muddling, a sort of correction, could be applied to future verdicts to ensure continued even-handedness. One limitation, Hardt points out, is that it may take a while to collect a sufficient stockpile of actual outcomes to compare with the algorithm’s predictions.
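The auditing half of that proposal boils down to bookkeeping: tally how often the algorithm errs in each direction for each group. A toy sketch, with invented arrays standing in for real records, looks like this:

```python
# Sketch: compare an algorithm's past calls with what really happened, broken
# out by group. The arrays are invented records, not COMPAS data.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])   # 1 = the person did reoffend
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 0])   # 1 = algorithm flagged as high risk
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def error_rates(label):
    """False positive and false negative rates for one demographic group."""
    t, p = y_true[group == label], y_pred[group == label]
    fpr = np.mean(p[t == 0] == 1)   # flagged as risky but never reoffended
    fnr = np.mean(p[t == 1] == 0)   # labeled low risk but did reoffend
    return fpr, fnr

for g in ("a", "b"):
    fpr, fnr = error_rates(g)
    print(f"group {g}: false positives {fpr:.2f}, false negatives {fnr:.2f}")

# If the rates differ, the proposed fix flips a calibrated fraction of future
# outputs for the disadvantaged group until the error rates line up.
```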
A third camp of researchers has written fairness guidelines into the machine-learning algorithms themselves. The idea is that when people let an algorithm loose on a training dataset, they don’t just give the software the goal of making accurate decisions. The programmers also tell the algorithm that its outputs must meet a certain standard of fairness, so it should design its decision-making procedure accordingly.

In April, computer scientist Bilal Zafar of the Max Planck Institute for Software Systems in Kaiserslautern, Germany, and colleagues proposed that developers add instructions to machine-learning algorithms to ensure they dole out errors to different demographics at equal rates — the same type of requirement Hardt’s team set. This technique, presented in Perth, Australia, at the International World Wide Web Conference, requires that the training data have information about whether the examples in the dataset were actually good or bad decisions. For something like stop-and-frisk data, where it’s known whether a frisked person actually had a weapon, the approach works. Developers could add code to their program that tells it to account for past wrongful stops.
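One loose way to picture such an instruction is as an extra penalty added to the training objective, so that the program pays a price whenever its mistakes fall unevenly across groups. The sketch below is a hypothetical rendering of that idea on synthetic data, not the method Zafar’s team published:

```python
# Sketch: train a classifier whose objective includes a fairness penalty, so
# unequal error rates across groups cost it during learning. The synthetic data,
# the plain logistic loss and the penalty weight are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
group = rng.integers(0, 2, size=200)                 # sensitive attribute (0 or 1)
y = (rng.random(200) < 1 / (1 + np.exp(-(X @ np.array([1.0, -1.0, 0.5]))))).astype(int)

def scores(w):
    return 1 / (1 + np.exp(-(X @ w)))                # predicted risk of reoffending

def unfairness(w):
    # Gap between groups in the average risk assigned to people who did NOT
    # reoffend -- a smooth stand-in for the gap in false positive rates.
    s, innocent = scores(w), (y == 0)
    return abs(s[innocent & (group == 0)].mean() - s[innocent & (group == 1)].mean())

def objective(w, lam):
    s = np.clip(scores(w), 1e-9, 1 - 1e-9)
    log_loss = -np.mean(y * np.log(s) + (1 - y) * np.log(1 - s))
    return log_loss + lam * unfairness(w)            # accuracy plus a fairness price

w_plain = minimize(objective, np.zeros(3), args=(0.0,)).x   # no fairness term
w_fair = minimize(objective, np.zeros(3), args=(5.0,)).x    # fairness enforced
print("unfairness without penalty:", unfairness(w_plain))
print("unfairness with penalty:   ", unfairness(w_fair))
```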

Zafar and colleagues tested their technique by designing a crime-predicting machine-learning algorithm with specific nondiscrimination instructions. The researchers trained their algorithm on a dataset containing criminal profiles and whether those people actually reoffended. By forcing their algorithm to be a more equal opportunity error-maker, the researchers were able to reduce the difference between how often blacks and whites who didn’t recommit were wrongly classified as being likely to do so: The fraction of people that COMPAS mislabeled as future criminals was about 45 percent for blacks and 23 percent for whites. In the researchers’ new algorithm, misclassification of blacks dropped to 26 percent and held at 23 percent for whites.

These are just a few recent additions to a small, but expanding, toolbox of techniques for forcing fairness on machine-learning systems. But how these algorithmic fix-its stack up against one another is an open question, since many of them use different standards of fairness. Some require algorithms to give members of different populations certain results at about the same rate. Others tell an algorithm to accurately classify or misclassify different groups at the same rate. Still others rely on notions of individual fairness, which require an algorithm to treat any two people who are alike in every respect except one sensitive feature the same way. To complicate matters, recent research has shown that, in some cases, meeting more than one fairness criterion at once can be impossible.
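The differences between those standards are easiest to see by computing several of them on the same predictions. In the toy sketch below, built on entirely made-up arrays, the algorithm errs at identical rates for both groups yet hands out favorable results to one group three times as often:

```python
# Sketch: three fairness notions evaluated on the same predictions.
# The arrays are toy data; real audits use far larger samples.
import numpy as np

y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# 1. Demographic parity: do both groups receive favorable results at similar rates?
print("positive-outcome rates:",
      y_pred[group == 0].mean(), y_pred[group == 1].mean())      # 0.75 vs 0.25

# 2. Equal error rates: are misclassifications spread evenly across groups?
print("error rates:",
      (y_pred[group == 0] != y_true[group == 0]).mean(),
      (y_pred[group == 1] != y_true[group == 1]).mean())         # 0.25 vs 0.25

# 3. Individual fairness: do people who differ only in group membership get the
#    same answer? Here the two groups are matched pair by pair, by position.
print("matched pairs treated alike:",
      (y_pred[group == 0] == y_pred[group == 1]).mean())
```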

“We have to think about forms of unfairness that we may want to eliminate, rather than hoping for a system that is absolutely fair in every possible dimension,” says Anupam Datta, a computer scientist at Carnegie Mellon.

Still, those who don’t want to commit to one standard of fairness can perform de-biasing procedures after the fact to see whether outputs change, Hardt says, which could be a warning sign of algorithmic bias.

Show your work
But even if someone discovered that an algorithm fell short of some fairness standard, that wouldn’t necessarily mean the program needed to be changed, Datta says. He imagines a scenario in which a credit-classifying algorithm might give favorable results to some races more than others. If the algorithm based its decisions on race or some race-related variable like zip code that shouldn’t affect credit scoring, that would be a problem. But what if the algorithm’s scores relied heavily on debt-to-income ratio, which may also be associated with race? “We may want to allow that,” Datta says, since debt-to-income ratio is a feature directly relevant to credit.

Of course, users can’t easily judge an algorithm’s fairness on these finer points when its reasoning is a total black box. So computer scientists have to find indirect ways to discern what machine-learning systems are up to.

One technique for interrogating algorithms, proposed by Datta and colleagues in 2016 in San Jose, Calif., at the IEEE Symposium on Security and Privacy, involves altering the inputs of an algorithm and observing how that affects the outputs. “Let’s say I’m interested in understanding the influence of my age on this decision, or my gender on this decision,” Datta says. “Then I might be interested in asking, ‘What if I had a clone that was identical to me, but the gender was flipped? Would the outcome be different or not?’ ” In this way, the researchers could determine how much individual features or groups of features affect an algorithm’s judgments. Users performing this kind of auditing could decide for themselves whether the algorithm’s use of data was cause for concern. Of course, if the code’s behavior is deemed unacceptable, there’s still the question of what to do about it. There’s no “So your algorithm is biased, now what?” instruction manual.
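A bare-bones version of that probe is straightforward to sketch: score an applicant, flip one sensitive field, and score the “clone.” The tiny dataset, feature names and model below are hypothetical; Datta’s published approach measures such influences over many randomized interventions rather than a single flipped copy.

```python
# Sketch: probe a black-box model by flipping one sensitive input and watching
# the output. The dataset, feature names and model are all hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

data = pd.DataFrame({
    "age":            [25, 40, 35, 50, 23, 60],
    "gender":         [0, 1, 0, 1, 1, 0],      # 0 and 1 are arbitrary codes
    "debt_to_income": [0.4, 0.2, 0.5, 0.1, 0.6, 0.3],
    "approved":       [0, 1, 0, 1, 0, 1],
})
model = GradientBoostingClassifier(random_state=0).fit(
    data[["age", "gender", "debt_to_income"]], data["approved"])

applicant = pd.DataFrame([{"age": 35, "gender": 0, "debt_to_income": 0.3}])
clone = applicant.copy()
clone["gender"] = 1 - clone["gender"]          # an identical clone, gender flipped

p_original = model.predict_proba(applicant)[0, 1]
p_clone = model.predict_proba(clone)[0, 1]
print(f"approval probability: {p_original:.2f} vs. clone {p_clone:.2f}")
# A large gap suggests the model is leaning on gender, or a proxy for it.
```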
The effort to curb machine bias is still in its nascent stages. “I’m not aware of any system either identifying or resolving discrimination that’s actively deployed in any application,” says Nathan Srebro, a computer scientist at the University of Chicago. “Right now, it’s mostly trying to figure things out.”

Computer scientist Suresh Venkatasubramanian agrees. “Every research area has to go through this exploration phase,” he says, “where we may have only very preliminary and half-baked answers, but the questions are interesting.”

Still, Venkatasubramanian, of the University of Utah in Salt Lake City, is optimistic about the future of this important corner of computer and data science. “For a couple of years now … the cadence of the debate has gone something like this: ‘Algorithms are awesome, we should use them everywhere. Oh no, algorithms are not awesome, here are their problems,’ ” he says. But now, at least, people have started proposing solutions, and weighing the various benefits and limitations of those ideas. So, he says, “we’re not freaking out as much.”

Telling children they’re smart could tempt them to cheat

It’s hard not to compliment kids on certain things. When my little girls fancy themselves up in tutus, which is every single time we leave the house, people tell them how pretty they are. I know these folks’ intentions are good, but an abundance of compliments on clothes and looks sends messages I’d rather my girls didn’t absorb at ages 2 and 4. Or ever, for that matter.

Our words, often spoken casually and without much thought, can have a big influence on little kids’ views of themselves and their behaviors. That’s very clear from two new studies on children who were praised for being smart.

The studies, conducted in China on children ages 3 and 5, suggest that directly telling kids they’re smart, or that other people think they’re intelligent, makes them more likely to cheat to win a game.

In the first study, published September 12 in Psychological Science, 150 3-year-olds and 150 5-year-olds played a card guessing game. An experimenter hid a card behind a barrier and the children had to guess whether the card’s number was greater or less than six. In some early rounds of the game, a researcher told some of the children, “You are so smart.” Others were told, “You did very well this time.” Still others weren’t praised at all.

Just before the kids guessed the final card in the game, the experimenter left the room, but not before reminding the children not to peek. A video camera monitored the kids as they sat alone.

The children who had been praised for being smart were more likely to peek, either by walking around or leaning over the barrier, than the children in the other two groups, the researchers found. Among 3-year-olds who had been praised for their performance (“You did very well this time”) or not praised at all, about 40 percent cheated. But the share of cheaters jumped to about 60 percent among the 3-year-olds who had been praised as smart. Similar, but slightly lower, numbers were seen for the 5-year-olds.

In another paper, published July 12 in Developmental Science, the same group of researchers tested whether having a reputation for smarts would have an effect on cheating. At the beginning of a similar card game played with 3- and 5-year-old Chinese children, researchers told some of the kids that they had a reputation for being smart. Other kids were told they had a reputation for cleanliness, while a third group was told nothing about their reputation. The same phenomenon emerged: Kids told they had a reputation for smarts were more likely than the other children to peek at the cards.
The kids who cheated probably felt more pressure to live up to their smart reputation, and that pressure may promote winning at any cost, says study coauthor Gail Heyman. She’s a psychologist at the University of California, San Diego and a visiting professor at Zhejiang Normal University in Jinhua, China. Other issues might be at play, too, she says, “such as giving children a feeling of superiority that gives them a sense that they are above the rules.”

Previous research has suggested that praising kids for their smarts can backfire in a different way: It might sap their motivation and performance.

Heyman was surprised to see that children as young as 3 shifted their behavior based on the researchers’ comments. “I didn’t think it was worth testing children this age, who have such a vague understanding of what it means to be smart,” she says. But even in these young children, words seemed to have a powerful effect.

The results, and other similar work, suggest that parents might want to curb the impulse to tell their children how smart they are. Instead, Heyman suggests, keep praise specific: “You did a nice job on the project,” or “I like the solution you came up with.” Likewise, comments that focus on the process are good choices: “How did you figure that out?” and “Isn’t it fun to struggle with a hard problem like that?”

It’s unrealistic to expect parents — and everyone else who comes into contact with children — to always come up with the “right” compliment. But I do think it’s worth paying attention to the way we talk with our kids, and what we want them to learn about themselves. These studies have been a good reminder for me that comments made to my kids — by anyone — matter, perhaps more than I know.

Saturn’s rings mess with the gas giant’s atmosphere

NEW ORLEANS — Saturn’s mighty rings cast a long shadow on the gas giant — and not just in visible light.

Final observations from the Cassini spacecraft show that the rings block the sunlight that charges particles in Saturn’s atmosphere. The rings may even be raining charged water particles onto the planet, researchers report online December 11 in Science and at the fall meeting of the American Geophysical Union.

In the months before plunging into Saturn’s atmosphere in September (SN Online: 9/15/17), the Cassini spacecraft made a series of dives between the gas giant and its iconic rings (SN Online: 4/21/17). Some of those orbits took the spacecraft directly into Saturn’s ionosphere, a layer of charged particles in the upper atmosphere. The charged particles are mostly the result of ultraviolet radiation from the sun separating electrons from atoms.
Jan-Erik Wahlund of the Swedish Institute of Space Physics in Uppsala and Ann Persoon of the University of Iowa in Iowa City and their colleagues examined data from 11 of Cassini’s dives through the rings. The researchers found a lower density of charged particles in the regions associated with the ring shadows than elsewhere in the ionosphere. That finding suggests the rings block ultraviolet light, the team concludes.

Blocked sunlight can’t explain everything surprising about the ionosphere, though. The ionosphere was more variable than the researchers expected, with its electron density sometimes changing by more than an order of magnitude from one Cassini orbit to the next.

Charged water particles chipped off of the rings could periodically splash into the ionosphere and sop up the free electrons, the researchers suggest. This idea, known as “ring rain,” was proposed in the 1980s (SN: 8/9/86, p. 84) but has still never been observed directly.

Hubble telescope ramps up search for Europa’s watery plumes

OXON HILL, Md. — Astronomers may soon know for sure if Europa is spouting off. After finding signs that Jupiter’s icy moon emits repeating plumes of water near its southern pole, astronomers using the Hubble Space Telescope hope to detect more evidence of the geysers.

“The statistical significance is starting to look pretty good,” astronomer William Sparks of the Space Telescope Science Institute in Baltimore says. He presented preliminary results on the hunt for the plumes at a meeting of the American Astronomical Society on January 9.
Sparks’ team started observing Europa on January 5, hoping to catch it passing in front of Jupiter 30 times before September. Hubble can detect active plumes silhouetted against background light from Jupiter. If the plume repeats as often as it seems to, “it’s essentially a certainty we’ll see it again if it’s real,” Sparks said.

Europa probably hosts a vast saltwater ocean buried under a thick icy shell. In 2012, astronomers using Hubble spotted high concentrations of hydrogen and oxygen over Europa’s southern hemisphere — signs that Europa was spitting water into space (SN: 1/25/14, p. 6). Later efforts to find those signs using the same technique yielded nothing.

But using Jupiter as a backdrop for the plumes, Sparks and his colleagues spotted several eruptions (SN Online: 9/26/16) — once in March 2014, again in February 2016 and possibly also in March 2017, Sparks said.

Maps of Europa’s heat and ionosphere made by the Galileo spacecraft in the 1990s show the plumes’ location was warmer than the surrounding ice. It also had an unusually high concentration of charged particles, perhaps the result of water splitting into hydrogen and oxygen. Both observations support the idea that some ocean is escaping at that spot.

“If it’s a coincidence, it’s a hell of a coincidence,” Sparks says.

Let your kids help you, and other parenting tips from traditional societies

Hunter-gatherers and farming villagers don’t write parenting handbooks, much less read them. But parents in WEIRD societies — Western, educated, industrialized, rich and democratic — can still learn a few childrearing lessons from their counterparts in small-scale societies.

It’s not that Western parents and kids are somehow deficient. But we live in a culture that holds historically unprecedented expectations about how to raise children. Examples: Each child is a unique individual who must be allowed to make decisions independently; children are precious and innocent, so their needs are more important than those of adults; and kids need to be protected from themselves by constant adult supervision.
When compared to family life in foraging and farming cultures, and in WEIRD societies only a few decades ago, there is nothing “normal” about parenting convictions such as these.

“Childhood, as we now know it, is a thoroughly modern invention,” says anthropologist David Lancy of Utah State University in Logan. He has studied traditional societies for more than 40 years.

In his book Raising Children: Surprising Insights from Other Cultures, Lancy examines what’s known about bringing up kids in hunter-gatherer groups and farming villages. Among the highlights:

Babies are usually regarded as nonpeople, requiring swaddling and other special procedures over months or years to become a human being.
Children are typically the lowest-ranking community members.
Because kids can’t feed and protect themselves, they accumulate a moral debt to their elders that takes years of hard work to repay.
If that sounds harsh to WEIRD ears, withhold judgment before considering these child-rearing themes from traditional cultures.

Allow for make-believe about real life
Hunter-gatherer and village kids intently observe and imitate adults (SN: 2/17/18, p. 22). Playtime often consists of youngsters of various ages acting out and even parodying adult behaviors. Virtually everything, from relations between the sexes to religious practices, is fair game. Kids scavenge for props, assign each other roles and decide what the cast of characters will say.

Western children would benefit from many more chances to play in unsupervised, mixed-age groups, Lancy says.

Let kids play collaborative games
A big advantage of play groups of kids of all ages is that they become settings for games in which kids negotiate the rules. Until recently, these types of games, such as marbles, hopscotch and jump rope, were common among U.S. children.

Not anymore, at least not in neighborhoods dominated by adult-supervised play dates and sports teams. Sure, tempers can flare as village youngsters hash out rules for marbles or jacks. But negotiations rarely go off the rails. Older kids handicap themselves so that younger children can sometimes win a game. Concessions are made even for toddlers.

The point is to maintain good enough relations to keep adults from intruding. In modern societies, Lancy suspects, bullying flourishes when kids don’t learn early on how to play collaboratively.

Put young children to work
In most non-WEIRD societies, miniature and cast-off tools and utensils, including knives, are the toys of choice for kids of all ages. Play represents a way to prepare for adult duties and, when possible, work alongside adults as helpers.

Western parents can find ways for preschoolers to help out around the house, but it demands flexibility and patience. Lancy suggests making allowances for a 3-year-old who mixes up socks when sorting the laundry. Maybe paper plates are needed until a kitchen helper becomes less apt to drop them.

Still, carefully selected jobs for 3- and 4-year-olds promote a sense of obligation and sympathy toward others, Lancy says. Western kids given chances to help adults early on may, like their non-WEIRD peers, willingly perform chores at later ages, he predicts.

Whether children live in city apartments or forest huts, having the freedom to explore and play with no adults around proves an antidote to boredom. Lancy recalls how boredom-busting worked in his own early childhood in rural Pennsylvania during the 1950s. His family lived in a house bordering a river, and Lancy would sit on the river bank for up to an hour at a time. His mother liked to tell visitors that, when asked what he had been doing, the boy replied he had been “watching the ’flections.”

To hear the beat, your brain may think about moving to it

If you’ve ever felt the urge to tap along to music, this research may strike a chord.

Recognizing rhythms doesn’t involve just parts of the brain that process sound — it also relies on a brain region involved with movement, researchers report online January 18 in the Journal of Cognitive Neuroscience. When an area of the brain that plans movement was disabled temporarily, people struggled to detect changes in rhythms.

The study is the first to connect humans’ ability to detect rhythms to the posterior parietal cortex, a brain region associated with planning body movements as well as higher-level functions such as paying attention and perceiving three dimensions.
“When you’re listening to a rhythm, you’re making predictions about how long the time interval is between the beats and where those sounds will fall,” says coauthor Jessica Ross, a neuroscience graduate student at the University of California, Merced. These predictions are part of a system scientists call relative timing, which helps the brain process repetitive sounds, like a musical rhythm.

“Music is basically sounds that have a structure in time,” says Sundeep Teki, a neuroscientist at the University of Oxford who was not involved with the study. Studies like this, which investigate where relative timing takes place in the brain, could be crucial to understanding how the brain deciphers music, he says.

Researchers found hints of the relative timing system in the 1980s, when they observed that Parkinson’s patients with damaged areas of the brain that control motion also had trouble detecting rhythms. But it wasn’t clear that those regions were causing patients’ difficulty with timing — Parkinson’s disease can wreak havoc on many areas of the brain.
Ross and her colleagues applied magnetic pulses to two different areas of the brain in 25 healthy adults. Those areas — the posterior parietal cortex and the supplementary motor area, which controls movement — were then unable to function properly for about an hour.

Suppressing activity in the supplementary motor area caused no significant change in participants’ ability to follow a beat. But when the posterior parietal cortex was suppressed, all of the adults had trouble keeping rhythm. For example, when listening to music overlaid with beeps that were on the beat as well as off the beat, participants frequently failed to differentiate between the two. This finding suggests the posterior parietal cortex is necessary for relative timing, the researchers say.

The brain has another timing system that was unaffected by the suppression of activity in either brain region: discrete timing, which keeps track of duration. Participants could distinguish between two notes held for different amounts of time. Ross says this suggests that discrete timing is governed by other parts of the brain. Adults also had no trouble differentiating fast and slow tempos, despite tempo’s connection to rhythm, which might imply the existence of a third timing system, Ross says.

Research into how the brain processes time, sound and movement has implications for understanding how humans listen to music and speech, as well as for treating diseases like Parkinson’s.

Still, many questions about the brain’s timing mechanisms remain (SN: 07/25/15, p. 20): What are the evolutionary origins of different timing mechanisms? How do they work in conjunction to create musical perception? And why do most other animals seem to lack a relative timing system?

Scientists are confident that they will have answers — all in good time.

New mapping shows just how much fishing impacts the world’s seas

Fishing has left a hefty footprint on Earth. Oceans cover more than two-thirds of the planet’s surface, and industrial fishing occurred across 55 percent of that ocean area in 2016, researchers report in the Feb. 23 Science. In comparison, only 34 percent of Earth’s land area is used for agriculture or grazing.

Previous efforts to quantify global fishing have relied on a hodgepodge of scant data culled from electronic monitoring systems on some vessels, logbooks and onboard observers. But over the last 15 years, most commercial-scale ships have been outfitted with automatic identification system (AIS) transceivers, a tracking system meant to help ships avoid collisions.
In the new study, the researchers examined 22 billion AIS positions from 2012 through 2016. Using a computer trained with a type of machine learning, the team then identified more than 70,000 fishing vessels and tracked their activity.
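The study’s actual system operated at a vastly larger scale, but the gist of supervised vessel classification can be sketched with a few summary features of a ship’s track. Everything below, from the file name to the feature list, is a hypothetical stand-in rather than the team’s code or data:

```python
# Sketch: label vessels as fishing or not from summary features of their AIS
# tracks. File name, feature list and model choice are assumptions, not the
# study's pipeline.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

tracks = pd.read_csv("ais_track_features.csv")   # hypothetical per-vessel summaries
X = tracks[["mean_speed_knots", "speed_variance", "turning_rate", "hours_at_sea"]]
y = tracks["is_fishing_vessel"]                  # labels from registries and observers

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Applied to unlabeled tracks, the same model flags likely fishing vessels, whose
# positions can then be binned into a global map of fishing effort.
```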

Much of the fishing was concentrated in countries’ exclusive economic zones — ocean regions within about 370 kilometers of a nation’s coastline — and in certain hot spots farther out in the open ocean, the team found. Such hot spots included the northeastern Atlantic Ocean and the nutrient-rich upwelling regions off the coasts of South America and West Africa.

Surprisingly, just five countries — China, Spain, Taiwan, Japan and South Korea — accounted for nearly 85 percent of fishing efforts on the high seas, the regions outside of any country’s exclusive economic zone.

Tracking the fishing footprint in space and time, the researchers note, can help guide marine environmental protections and international conservation efforts for fish. That may be particularly important in a time of rapid change due to rising ocean temperatures and increasing human activity on the high seas.

Water may have killed Mars’ magnetic field

THE WOODLANDS, Texas — Mars’ missing magnetic field may have drowned in the planet’s core.

An excess of hydrogen, split off from water molecules and stored in the Martian mantle, could have shut down convection, switching the magnetic field off forever, planetary scientist Joseph O’Rourke proposed March 21 at the Lunar and Planetary Science Conference.

Planetary scientists think magnetic fields are produced by the churning of a planet’s molten iron core. Convection relies on denser material sinking deeper into the core and lighter material rising toward its top. The movement of iron, which can carry a charge, generates a strong magnetic field that can protect a planet’s atmosphere from being ravaged by solar wind (SN Online: 8/18/17).
But if lighter material, like hydrogen, settles close to the iron core, it could block dense material from sinking deep enough to keep convection going, said O’Rourke, of Arizona State University in Tempe.

“Too much hydrogen and you can shut down convection entirely,” he said. “Hydrogen is a heartless killer.”

O’Rourke and his ASU colleague S.-H. Dan Shim suggested the hydrogen could come from water locked up in Martian minerals. Near the hot core, water would split into hydrogen and oxygen. The oxygen would form compounds with other elements and stay high in the mantle, but the hydrogen could sit atop the core and effectively suffocate the dynamo.
The question is whether Mars’ minerals would have had what it took to deliver the hydrogen at the right time. Mars’ crust is rich in the mineral olivine, which does not bond well with water and so is relatively dry.

In the planet’s interior, pressure forces olivine to transform into the minerals wadsleyite and ringwoodite, which hold more water. Deeper still, the mineral turns into bridgmanite and becomes dry again. For a time, that bridgmanite layer could act as a buffer against water, allowing the core to keep convecting. But as the mantle cooled, the bridgmanite layer would shrink and eventually disappear, O’Rourke’s study suggests.

Whether Mars’ interior ever had that saving layer of bridgmanite depends on how big its core is — a property that may be tested by NASA’s InSight Mars lander, launching on May 5, O’Rourke said. Mars did have a magnetic field more than 4 billion years ago. Scientists have struggled to explain how it vanished, leaving the planet vulnerable to solar winds, which probably stripped away its atmosphere and surface water (SN: 12/12/15, p. 31).

If hydrogen shut down the planet’s generator, it would have had to act fast. Previous observations suggest the magnetic field disappeared relatively rapidly, over 100 million years.

Another theory by James Roberts of the Johns Hopkins Applied Physics Lab in Laurel, Md., suggests a large impact could have shut down the dynamo by heating the outermost core, which would have kept it from sinking.

“It’s actually a similar idea to O’Rourke’s,” Roberts says. It may take many more sophisticated Mars missions to figure out what really happened.

How honeybees’ royal jelly might be baby glue, too

Honeybee royal jelly is food meant to be eaten on the ceiling. And it might also be glue that keeps a royal baby in an upside-down cradle.

These bees raise their queens in cells that can stay open at the bottom for days. A big blob of royal jelly, abundantly resupplied by worker bees, surrounds the larva at the ceiling. Before the food is deposited in the cell, it receives a last-minute jolt of acidity that triggers its proteins to thicken into goo, says Anja Buttstedt, a protein biochemist at Technische Universität Dresden in Germany. Basic larva-gripping tests suggest the jelly’s protein chemistry helps keep future queens from dropping out of their cells, Buttstedt and colleagues propose March 15 in Current Biology.
Suspecting the stickiness of royal jelly might serve some function, researchers tweaked its acidity. They then filled small cups with royal jelly with different pH levels and gently turned the cups upside down. At a natural royal jelly acidity of about pH 4.0, all 10 larvae dangled from their gooey blobs upside down overnight. But in jelly boosted to pH 4.8 (and thinned in the process), four of the 10 larvae dropped from the cups. At pH 5.9, all of them dropped.

Honeybees build several forms of royally oversized cells for raising a queen. Those for queens who will swarm with their workers to a new home hang from the rim of an array of regular cells. A hole stays open at the bottom of the cell until the larva nears pupation from her fat grub shape into a queen with wings. That hole at the bottom is big enough for a royal larva to fall through, confirms insect physiologist Steven Cook at the honeybee research lab in Beltsville, Md., run by the U.S. Department of Agriculture’s Agricultural Research Service.

Buttstedt and colleagues propose that the stickiness of royal jelly may be what keeps the larva in place. The team worked out how the jelly’s proteins change as it is made, and how those changes affect its consistency.

Royal jelly is secreted as a brew of proteins from the glands above a worker bee’s brain. At that point, it has a neutral pH, around 7, like water’s. The worker bee then adds fatty acids from glands in her mouthparts, which take the pH to around 4.
“It has a quite sour smell,” Buttstedt says. As for taste? “Really weird.” A steady diet of this jelly is what turns a larva into a queen instead of a worker.

At pH 4, the jelly’s most common protein, MRJP1, gets complicated. When the protein leaves the glands above the brain, it’s clustered in groups of four along with smaller proteins called apisimins, the team found. When the acidity shifts, the MRJP1 foursomes and the apisimins hook together in slender fibers and get gluey.

“The most puzzling question,” Buttstedt says, is “why build upside-down queen cells in the first place?”

Delusions of skin infestation may not be so rare

Delusional infestation
de-LU-zhen-al in-fes-TAY-shun n.
A deep conviction that one’s skin is contaminated with insects or other objects despite a lack of medical evidence.

She was certain her skin was infested: Insects were jumping off; fibers were poking out. Fearful her condition could spread to others, the 50-year-old patient told doctors at the Mayo Clinic in Rochester, Minn., that she was avoiding contact with her children and friends.

The patient had delusional infestation, explains Mayo Clinic dermatologist Mark Davis. Sufferers have an unshakable belief that pathogens or inanimate objects pollute their skin despite no medical evidence. Davis and colleagues report online April 4 in JAMA Dermatology that the disorder is not as rare as previously assumed.
In the first population-based study of the disorder’s prevalence, the researchers identified 35 cases from 1976 to 2010 reported in Minnesota’s Olmsted County. Based on the findings, the authors estimate 27 out of every 100,000 people in the United States have delusional infestation. Due to the county’s lack of diversity — the population of about 150,000 is predominantly white — the researchers used only the nationwide white population to estimate prevalence, so the result may not be representative of other populations.

Delusional infestation has been recognized for decades, albeit under different names. Patients insist they’ve been overtaken with creatures, such as insects, worms or parasites, or inanimate materials like fibers — or both.
“It’s like aliens have infested their skin,” Davis says. Some present bagged samples of the claimed culprits, which turn out to be such debris as sand, dander or, as in the case of the 50-year-old woman, bits of skin and scabs. When lab tests confirm no infestation, patients often seek another opinion rather than accept the findings. Some attempt risky self-treatments, such as bathing in kerosene or bleach, or tweezing or cutting the skin.

Schizophrenia, dementia or other psychiatric illnesses can trigger delusional infestation. So can such drugs as amphetamines or cocaine. But when no other illness is involved, patients often reject the notion that the issue is psychiatric and tend to refuse the antipsychotic medications that can help, Davis says.

As for the 50-year-old patient, upset with the doctors’ diagnosis, she no longer comes to the Mayo Clinic.