Don’t blame the tryptophan in your Thanksgiving turkey. The post-dinner drowsiness probably results from carbs and alcohol
A scientist tracks the dangers of flame retardants, meant to protect children, and why manufacturers cannot seem to stop using them.
If everything goes according to plan, 2024 will see NASA launch the Europa Clipper mission, which is designed to make multiple passes of Europa to study the water-rich moon’s potential to host life. The big challenge the mission will face is that any liquid water is likely to be far below Europa’s icy surface. At best, we can hope for some indication of what’s going on based on the composition of any material trapped in the ice itself or the possible presence of geysers that release bits of the moon’s interior to space.
That makes it critical to understand what sorts of remote sensing might be possible. To that end, some NASA scientists have looked into how ice behaves in Jupiter’s high-radiation environment. They found that Europa’s ice probably glows in the dark, and that glow may carry some information about what’s present in the ice.
The mechanism that can make Europa’s ice glow is a bit like the one behind a black-light poster. There, light outside the visible wavelengths excites molecules that then release the energy at wavelengths we can see. In Europa’s case, the excitation energy doesn’t come from light; it’s supplied indirectly by Jupiter’s magnetic fields, which pick up charged particles liberated by the planet itself (or one of its moons) and accelerate them. (Much of the material in Jupiter’s high-radiation areas was expelled into space by volcanoes on its moon Io.)
Only a few weeks after the successful public offering of Array Technologies proved that there’s a market for technologies aimed at improving efficiencies across the solar manufacturing and installation chain, Leading Edge Equipment has raised capital for its novel silicon wafer manufacturing equipment.
For the last few years, researchers have been talking up the potential of so-called kerfless, single-crystal silicon wafers. The single-crystal-versus-polycrystalline debate may sound familiar to industry watchers, but as with many things in the resurgence of climate technology investment, maybe this time will be different.
Silicon wafer production today is a seven-step process in which large silicon ingots created in heavily energy-intensive furnaces are sawed into wafers by wires. The process wastes large amounts of silicon, requires an incredible amount of energy and produces low-quality wafers that reduce the efficiency of solar panels.
Leading Edge’s manufacturing equipment uses the floating silicon method to produce wafers as ribbons, reducing production to a single step that consumes less energy and produces almost no waste, according to the company.
The company was founded by two longtime experts in the silicon foundry industry: Alison Greenlee, a quadruple-degreed graduate of the Massachusetts Institute of Technology who worked on the floating silicon method, which reduces waste in the manufacturing of silicon for solar cells; and Peter Kellerman, the progenitor of the technology. The two founded Leading Edge Equipment to rejuvenate a project that had been mothballed by Applied Materials after years of research.
The two won $5 million in federal grants and raised an initial $6 million from venture capital firms in 2018 to kick off the technology.
Leading Edge expects that its equipment could become the standard for silicon substrate manufacturing.
Kellerman, now the emeritus chief technology officer, was replaced by Nathan Stoddard, a seasoned silicon manufacturing technology expert who has worked on teams that have brought three different solar wafer technologies from concept to pilot production. Stoddard, a former colleague of Greenlee’s at 1366 — one of the early companies devoted to new silicon production technologies — was won over by Greenlee and Kellerman’s belief in the old Applied Materials technology.
The company claims that its technology can reduce wafer costs by 50 percent, increase commercial solar panel power by up to seven percent, and cut manufacturing emissions by over 50 percent.
To commercialize the project, earlier this year the team brought in Rick Schwerdtfeger, a longtime innovator in solar technology who began working with CIGS crystals back in 1995. In the 2000s, Schwerdtfeger spent his time building out ARC Energy to scale next-generation furnace technologies.
“After critical technology demonstrations and the development of a new commercial tool, we are now ready to launch this technology into market in 2021,” said Schwerdtfeger in a statement. “Having recently secured a 31,000 square foot facility and doubled the size of our team, we will use this new funding to prepare for our 2021 commercial pilots.”
Capsaicin is the compound responsible for determining just how hot a variety of chili pepper will be; the higher the capsaicin levels, the hotter the pepper. There are several methods for quantifying just how much capsaicin is present in a pepper—its “pungency”—but they are either too time-consuming, too costly, or require special instruments, making them less than ideal for widespread use.
Now a team of scientists from Prince of Songkla University in Thailand has developed a simple, portable sensor device that can connect to a smartphone to show how much capsaicin is contained in a given chili pepper sample, according to a new paper in the journal ACS Applied Nano Materials. Bonus: the device is whimsically shaped just like a red-hot chili pepper.
An American pharmacist named Wilbur Scoville invented his eponymous Scoville scale for assessing the relative hotness of chili peppers back in 1912. That testing process involves dissolving a precise amount of dried pepper in alcohol so as to extract the capsaicinoids. The capsaicinoids are then diluted in sugar water. A panel of five trained tasters then tastes multiple samples with decreasing concentrations of capsaicinoids until at least three of them can no longer detect the heat in a given sample. The hotness of the pepper is then rated according to its Scoville heat units (SHU).
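The arithmetic behind the scale is simple enough to sketch in a few lines of Python. This is a minimal illustration, not code from the paper; the 16× ppm-to-SHU conversion reflects the common convention that pure capsaicin (one million ppm) rates roughly 16 million SHU, and the example pepper figure is made up:

```python
# Minimal sketch of the Scoville arithmetic (illustrative, not from the paper).

def scoville_heat_units(dilution_factor: float) -> float:
    """The SHU rating is simply the dilution at which a trained panel can
    no longer detect heat: an extract that must be diluted 1:5,000 before
    the heat disappears scores 5,000 SHU."""
    return dilution_factor

def shu_from_capsaicin_ppm(capsaicin_ppm: float) -> float:
    """Instrumental methods measure capsaicinoid concentration directly.
    A common rule of thumb multiplies parts-per-million of capsaicin by 16,
    since pure capsaicin (one million ppm) rates about 16 million SHU."""
    return capsaicin_ppm * 16

# A hypothetical pepper with ~300 ppm capsaicin lands near 4,800 SHU,
# within the range usually quoted for jalapenos.
print(shu_from_capsaicin_ppm(300))  # 4800
```

Sensor-based devices like the one described above effectively replace the taste panel in the first function with a direct concentration measurement feeding the second.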
A few years back, it looked like plastic recycling was set to become a key part of a sustainable future. Then, the price of fossil fuels plunged, making it cheaper to manufacture new plastics. Then China essentially stopped importing recycled plastics for use in manufacturing. With that, the bottom dropped out of plastic recycling, and the best thing you could say for most plastics is that they sequestered the carbon they were made of.
The absence of a market for recycled plastics, however, has also inspired researchers to look at other ways of using them. Two papers this week have looked into processes that enable “upcycling,” or converting the plastics into materials that can be more valuable than the freshly made plastics themselves.
The first paper, from an international collaboration, actually obtained the plastics it tested from a supermarket chain, so we know the process works on relevant materials. The upcycling it describes also has the advantage of working with very cheap, iron-based catalysts. Normally, to break down plastics, catalysts and the plastics are heated together. But in this case, the researchers simply mixed the catalyst with ground-up plastics and heated the iron using microwaves.
When it comes to making efficient fuel cells, it’s all about the catalyst. A good catalyst will result in faster, more efficient chemical reactions and, thus, increased energy output. Today’s fuel cells typically rely on platinum-based catalysts. But scientists at American University believe that spinach—considered a “superfood” because it is so packed with nutrients—would make an excellent renewable carbon-rich catalyst, based on their proof-of-principle experiments described in a recent paper published in the journal ACS Omega. Popeye would definitely approve.
The notion of exploiting the photosynthetic properties of spinach has been around for about 40 years now. Spinach is plentiful, cheap, easy to grow, and rich in iron and nitrogen. Many (many!) years ago, as a budding young science writer, I attended a conference talk by physicist Elias Greenbaum (then with Oak Ridge National Labs) about his spinach-related research. Specifically, he was interested in the protein-based “reaction centers” in spinach leaves that are the basic mechanism for photosynthesis—the chemical process by which plants convert carbon dioxide into oxygen and carbohydrates.
There are two types of reaction centers. One type, known as photosystem 1 (PS1), converts carbon dioxide into sugar; the other, photosystem 2 (PS2), splits water to produce oxygen. Most of the scientific interest is in PS1, which acts like a tiny photosensitive battery, absorbing energy from sunlight and emitting electrons with nearly 100-percent efficiency. In essence, energy from sunlight converts water into an oxygen molecule, a positively charged hydrogen ion, and a free electron; these products then go on to drive the reactions that build sugar molecules. PS1s are capable of generating a light-induced flow of electricity in fractions of a second.
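The overall chemistry being harnessed here is the familiar photosynthesis reaction, 6 CO2 + 6 H2O → C6H12O6 + 6 O2. As a quick sanity check (an illustration only, not code from the study), a few lines of Python confirm the equation is atom-balanced:

```python
# Atom-balance check of 6 CO2 + 6 H2O -> C6H12O6 + 6 O2
# (illustrative only; not code from the study).
from collections import Counter

CO2 = Counter({"C": 1, "O": 2})
H2O = Counter({"H": 2, "O": 1})
glucose = Counter({"C": 6, "H": 12, "O": 6})
O2 = Counter({"O": 2})

def total(molecule: Counter, n: int) -> Counter:
    """Return the atom counts for n copies of a molecule."""
    return Counter({atom: count * n for atom, count in molecule.items()})

reactants = total(CO2, 6) + total(H2O, 6)   # 6 CO2 + 6 H2O
products = glucose + total(O2, 6)           # C6H12O6 + 6 O2

print(reactants == products)  # True: the equation is balanced
```

Both sides tally six carbons, twelve hydrogens, and eighteen oxygens, which is why the stoichiometry above is the textbook form of the reaction.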
It conducts electricity at the temperature of a crisp fall day, but only under pressures comparable to what you’d find closer to Earth’s core.
In the period after the discovery of high-temperature superconductors, there wasn’t a good conceptual understanding of why those compounds worked. While there was a burst of progress towards higher temperatures, it quickly ground to a halt, largely because it was fueled by trial and error. Recent years brought a better understanding of the mechanisms that enable superconductivity, and we’re seeing a second burst of rapidly rising temperatures.
The key to the progress has been a new focus on hydrogen-rich compounds, built on the knowledge that hydrogen’s vibrations within a solid help encourage the formation of superconducting electron pairs. By using ultra-high pressures, researchers have been able to force hydrogen into solids that turned out to superconduct at temperatures that could be reached without resorting to liquid nitrogen.
Now, researchers have cleared a major psychological barrier by demonstrating the first chemical that superconducts at room temperature. There are just two catches: we’re not entirely sure what the chemical is, and it only works at 2.5 million atmospheres of pressure.
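For a sense of scale, that pressure can be converted to gigapascals, the unit high-pressure physicists usually quote. A back-of-the-envelope conversion using the standard atmosphere (101,325 Pa):

```python
# Converting the reported ~2.5 million atmospheres into gigapascals
# (a back-of-the-envelope illustration, not figures from the paper).
ATM_TO_PA = 101_325  # one standard atmosphere, in pascals

pressure_atm = 2.5e6
pressure_gpa = pressure_atm * ATM_TO_PA / 1e9

# Roughly 253 GPa -- the pressure at Earth's center is around 360 GPa,
# which is why "closer to Earth's core" is a fair comparison.
print(f"{pressure_gpa:.0f} GPa")  # 253 GPa
```

The conversion is purely arithmetic, but it makes clear just how far these conditions are from anything a practical device could tolerate.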
Right now, electric vehicles are limited by the range that their batteries allow. That’s because recharging the vehicles, even under ideal situations, can’t be done as quickly as refueling an internal combustion vehicle. So far, most of the effort on extending the range has been focused on increasing a battery’s capacity. But it could be just as effective to create a battery that can charge much more quickly, making a recharge as fast and simple as filling your tank.
There is no shortage of ideas about how this might be arranged, but a paper published earlier this week in Science suggests an unusual way it might be accomplished: using a material called black phosphorus, which forms atom-thick sheets with lithium-sized channels in them. On its own, black phosphorus isn’t a great material for batteries, but a Chinese-US team has figured out how to manipulate it so it works much better. Even if black phosphorus doesn’t end up working out as a battery material, the paper provides some insight into the logic and process of developing batteries.
So, what is black phosphorus? The easiest way to understand it is by comparison to graphite, a material that’s already in use as an electrode for lithium-ion batteries. Graphite is a form of carbon that’s just a large collection of graphene sheets layered on top of each other. Graphene, in turn, is a single enormous molecule: a sheet of carbon atoms bonded to each other in a hexagonal pattern. In the same way, black phosphorus is composed of many layered sheets of an atom-thick material called phosphorene.
With Crispr, two scientists turned a curiosity of nature into an invention that will transform the human race.
On Wednesday, the Nobel Prize Committee awarded the Chemistry Nobel to Emmanuelle Charpentier and Jennifer Doudna, who made key contributions to the development of the CRISPR gene-editing system, which has been used to produce the first gene-edited humans. This award may spur a bit of controversy, as there were a lot of other contributors to the development of CRISPR (enough to ensure a bitter patent fight), and Charpentier and Doudna’s work was well into the biology side of chemistry. But nobody’s going to argue that gene editing wasn’t destined for a Nobel Prize.
The history of CRISPR gene editing is a classic story of science: a bunch of people working in a not-especially cutting-edge area of science found something strange. The “something” in this case was an oddity in the genome sequences of a number of bacteria. Despite being very distantly related, the species all had a section of the genome where a set of DNA sequences were repeated, with a short spacer in between them. The sequences picked up the name CRISPR for “clustered regularly interspaced short palindromic repeats,” but nobody knew what they were doing there.
The fact that they might be important became apparent when researchers recognized that bacteria that had CRISPR sequences invariably also had a small set of genes associated with them. Since bacteria tended to rapidly lose genes and repeat sequences that weren’t performing useful functions, this obviously implied some sort of utility. But it took 18 years for someone to notice that the spacer sequences matched those found in the genomes of viruses that infected the bacteria.
Emmanuelle Charpentier and Jennifer A. Doudna developed the Crispr tool, which can change the DNA of animals, plants and microorganisms with high precision.
Archaeologists are fascinated by many different aspects of cultures in the distant past, but determining what ancient people cooked and ate can be particularly challenging. A team of researchers spent an entire year analyzing the chemical residues of some 50 meals cooked in ceramic pots and found such cookware retained not just the remnants of the last meal cooked, but also clues as to earlier meals, spanning a pot’s lifetime of usage. This could give archaeologists a new tool in determining ancient diets. The researchers described their results in a recent paper published in the journal Scientific Reports.
According to co-author Christine Hastorf, an archaeologist at the University of California, Berkeley (UCB), the project has been several years in the making. Hastorf has long been interested in the relationships between people and plants throughout history, particularly as they pertain to what people ate in the past. Back in 1985, she co-authored a paper examining the isotopes of charred plant remains collected from the inside of pots. She has also long taught a food archaeology class at UCB. A few years ago, she expanded the course to two full semesters (nine months), covering both the ethnographic aspects as well as the archaeological methods one might use to glean insight into the dietary habits of the past.
The class was especially intrigued by recent molecular analysis of pottery, yet frustrated by the brevity of the studies done to date on the topic. Hastorf proposed conducting a longer study, and her students responded enthusiastically. So they devised a methodology, assigned research topics to each student, and located places to purchase grain (maize and wheat from the same region of the Midwest); they also received venison in the form of donated deer roadkill. She even bought her own mill so they could grind the grains themselves, setting it up in her home garage.
Last week at TechCrunch Disrupt 2020, I got the chance to speak to Dr. Eric Feigl-Ding, an epidemiologist and health economist who is a Senior Fellow at the Federation of American Scientists. Dr. Feigl-Ding has been a frequent and vocal critic of some of the most profound missteps of regulators, public health organizations and the current White House administration, and we discussed specifically the topic of aerosol transmission and its notable absence from existing guidance in the U.S.
At the time, neither of us knew that the Centers for Disease Control (CDC) would publish updated guidance on its website over this past weekend that provided descriptions of aerosol transmission, and a concession that it’s likely a primary vector for passing on the virus that leads to COVID-19 – or that the CDC would subsequently revert said guidance, removing this updated information about aerosol transmission that’s more in line with the current state of widely accepted COVID research. The CDC cited essentially an issue where someone at the organization pushed a draft version of guidelines to production – but the facts it had shared in the update lined up very closely with what Dr. Feigl-Ding had been calling for.
“The fact that we haven’t highlighted aerosol transmission as much, up until recently, is woefully, woefully frustrating,” he said during our interview last Wednesday. “Other countries who’ve been much more technologically savvy about the engineering aspects of aerosols have been ahead of the curve – like Japan, they assume that this virus is aerosol and airborne. And aerosol means that the droplets are these micro droplets that can float in the air, they don’t get pulled down by gravity […] now we know that the aerosols may actually be the main drivers. And that means that if someone coughs, sings, even breathes, it can in the air, the micro droplets can stay in the air from anywhere from, for stagnant air for up to 16 hours, but normally with ventilation, between 20 minutes to four hours. And that air, if you enter it into a room after someone was there, you can still get infected, and that is what makes indoor dining and bars and restaurants so frustrating.”
Dr. Feigl-Ding points to a number of recent contact tracing studies as providing strong evidence that these indoor activities, and the opportunity they provide for aerosol transmission, are leading to a large number of infections. Such studies were featured in a report the CDC prepared on reopening advice, which was buried by the Trump administration according to an AP report from May.
“The latest report shows that indoor dining, bars, restaurants are the leading factors for transmission, once you do contact tracing,” he said, noting that this leads naturally to the big issues around schools reopening, including that many have “very poor ventilation,” while simultaneously they’re not able to open their windows or doors due to gun safety protocols in place. Even before this recent CDC guideline take-back, Dr. Feigl-Ding was clearly frustrated with the way the organization appears to be succumbing to politicization of what is clearly an issue of a large and growing body of scientific evidence and fact.
“The CDC has long been the most respected agency in the world for public health, but now it’s been politically muzzled,” he said. “Previously, for example, the guidelines around church attendance – the CDC advised against church gatherings, but then it was overruled. And it was clearly overruled, because we actually saw it changed in live time. […] In terms of schools, gatherings, it’s clear [that] keeping kids in a pod is not enough, given what we know about ventilation.”
One good trend in 2020 has been large technology companies almost falling over one another to make ever-bolder commitments regarding their ecological impact. A cynic might argue that just doing without most of the things they make could have a much greater impact, but Microsoft is the latest to make a commitment that not only focuses on minimizing its impact, but actually on reversing it. The Windows maker has committed to achieving a net positive water footprint by 2030, by which it means it wants to be putting more water back into the environment in the places it operates than it is drawing out, as measured across all “basins” that span its footprint.
Microsoft hopes to achieve this goal through two main types of initiatives: First, it’ll be reducing the “intensity” of its water use across its operations, as measured by the amount of water used per megawatt of energy consumed by the company. Second, it will also be looking to actually replenish water in the areas of the world where Microsoft operations are located in “water-stressed” regions, through efforts like investment in area wetland restoration, or the removal and replacement of certain surfaces, including asphalt, which are not water-permeable and therefore prevent water from natural sources like rainfall from being absorbed back into a region’s overall available basin.
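Microsoft hasn’t published an exact formula, but the two levers it describes, intensity reduction and basin-level replenishment, boil down to simple ratios. The sketch below is purely illustrative; the function names, units, and numbers are hypothetical, not Microsoft’s actual figures:

```python
# Hypothetical sketch of the two water metrics described above.
# All names, units, and numbers are illustrative assumptions,
# not Microsoft's actual methodology or data.

def water_intensity(water_m3: float, energy_mwh: float) -> float:
    """Water-use intensity: water consumed per unit of energy used
    (here, cubic meters per megawatt-hour)."""
    return water_m3 / energy_mwh

def is_net_positive(consumed_m3: float, replenished_m3: float) -> bool:
    """'Net positive' in a given basin: more water put back than drawn out."""
    return replenished_m3 > consumed_m3

# A made-up data center basin:
print(water_intensity(120_000, 400_000))   # 0.3 m^3 per MWh
print(is_net_positive(120_000, 150_000))   # True
```

Framing both goals as ratios also shows why they’re independent: a facility could cut its intensity dramatically and still fall short of net positive if its basin is heavily stressed.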
The company says that how much water it will return will vary, depending on how much Microsoft consumes in each region and on how much the local basin is under duress in terms of overall consumption. Microsoft isn’t going to rely solely on external sources for this info, however: it plans to put its artificial intelligence technology to work to provide better information about which areas are under stress in terms of water usage and where optimization projects would have the greatest impact. It’s already working toward these goals with a number of industry groups, including The Freshwater Trust.
Microsoft has made a number of commitments towards improving its global ecological impact, including a commitment from earlier this year to become ‘carbon negative’ by 2030. Meanwhile, Apple said in July that its products, including the supply chains that produce them, will be net carbon neutral by 2030, while Google made a commitment just last week to use only energy from carbon-free sources by that same year.
There’s nothing quite like the pleasure of sipping a fine Scotch whisky, for those whose tastes run to such indulgences. But how can you be sure that you’re paying for the real deal and not some cheap counterfeit? Good news: physicists at the University of St. Andrews in Scotland have figured out how to test the authenticity of bottles of fine Scotch whisky using laser light, without ever having to open the bottles. They described their work in a recent paper published in the journal Analytical Methods.
As we reported last year, there is an exploding demand for expensive rare whiskies—yes, even in the middle of a global pandemic—so naturally there has been a corresponding increase in the number of counterfeit bottles infiltrating the market. A 2018 study subjected 55 randomly selected bottles from auctions, private collectors, and retailers to radiocarbon dating and found that 21 of them were either outright fakes or not distilled in the year claimed on the label.
Ten of those fakes were supposed to be single-malt scotches from 1900 or earlier, prompting Rare Whisky 101 cofounder David Robertson to publicly declare, “It is our genuine belief that every purported pre-1900 bottle should be assumed fake until proven genuine, certainly if the bottle claims to be a single malt Scotch whisky.” There’s also an influx of counterfeit cheaper whiskies seeping into the markets, which could pose an even greater challenge, albeit less of a headline-grabbing one.
Better batteries are a critical enabling technology for everything from your gadgets all the way up to the stability of an increasingly renewable grid. But most of the obvious ways of squeezing more capacity into a battery have been tried, and they all run straight into problems. While there may be ways to solve those problems, they’re going to need a lot of work to overcome those hurdles.
Earlier this week, a paper described a new electrode material that seems to avoid the problems that have plagued other approaches to expanding battery capacity. And it’s a remarkably simple material: a variation on the same structure formed by crystals of table salt. While it’s far from ready to throw in a battery, the early data definitely indicate it’s worth looking into further.
Lithium-ion batteries, as their name implies, involve shuffling lithium between the cathode and the anode of the battery. The consequence of this is that both of the electrodes will end up needing to store lithium atoms. So most ideas for next-generation batteries involve finding electrode materials that do so more effectively.
The liquid levitates, and a boat floats along its bottom side.
Sour beer has been around for centuries, and has become a favorite with craft brewers in recent years. But the brewing process can be unpredictable. To help brewers better understand how sour beers develop their distinctive complex flavors, chemists at the University of Redlands in California have been tracking various chemical compounds that contribute to those flavor profiles, monitoring how their concentrations change over time during the aging process. They presented their initial findings during the American Chemical Society’s Fall 2020 Virtual Meeting & Expo last week.
Brewers of standard beer carefully control the strains of yeast they use, taking care to ensure other microbes don’t sneak into the mix, lest they alter the flavor during fermentation. Sour beer brewers use wild yeasts, letting them grow freely in the wort, sometimes adding fruit for a little extra acidity. Then the wort is transferred to wooden barrels and allowed to mature for months or sometimes years, as the microbes produce various metabolic products that contribute to sour beer’s unique flavor. But the brewers don’t always know exactly which compounds end up in the final product or how they will impact the overall flavor profile. “That is the quandary of the sour beer brewer,” said co-author David Soulsby during a virtual press conference.
“Sour beer tastes very different from regular beer, but it’s a very complex and rich flavor experience. These different flavors come from the complex processes that are occurring during aging,” said co-author Teresa Longin, who also happens to be married to Soulsby. “These processes are hard to control and can be hard to reproduce. Our research focuses on understanding what these processes are, what’s happening over time, so that the brewer can ultimately understand them and make better beer.”
Inkjet printing of two-dimensional crystals will be crucial for ushering in the next generation of printed electronics. While the technology has made a lot of progress in recent years, a major challenge to industrial-scale printed electronic components is achieving uniform distribution of the crystals; uneven distribution can result in faulty devices. The culprit is a phenomenon known as the “coffee ring effect.” Now scientists have created a new family of inks that can suppress the effect, according to a new paper in the journal Science Advances.
Coffee rings are the pattern you get when a liquid evaporates and leaves behind a ring of previously dissolved solids—coffee grounds in the case of your morning cup of joe, 2D crystals in the case of inkjet printing of electrical components. (You can also see the effect with single-malt scotch. A related phenomenon is wine tears.) The coffee ring effect occurs when a single liquid evaporates and the solids that had been dissolved in it (like coffee grounds or 2D crystals) form a telltale ring. It happens because evaporation occurs faster at the edge than at the center. Any remaining liquid flows outward to the edge to fill in the gaps, dragging those solids with it. Mixing in solvents (water or alcohol) reduces the effect, as long as the drops are very small; smaller drops produce more uniform stains.
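The mechanism just described—faster evaporation at the pinned edge pulling liquid and suspended solids outward—can be caricatured in a toy one-dimensional model. This sketch is purely illustrative and has nothing to do with the paper’s actual methods:

```python
# Toy 1D caricature of the coffee ring effect (illustrative only):
# a pinned drop is split into radial bins of suspended particles.
# Evaporation is fastest near the pinned edge, so liquid flows outward
# to replace it, dragging particles along until they pile up at the rim.
bins = 10
particles = [1.0] * bins  # start with a uniform suspension

for _ in range(200):
    # Outward particle flux grows toward the edge, mimicking the
    # faster evaporation there.
    flux = [0.05 * i / bins * p for i, p in enumerate(particles)]
    for i in range(bins):
        particles[i] -= flux[i]
        if i + 1 < bins:
            particles[i + 1] += flux[i]
        else:
            particles[i] += flux[i]  # deposited at the rim; stays put

# Most of the mass ends up in the outermost bin -- the "ring."
print(particles[-1] == max(particles))  # True
```

Even this crude drift model reproduces the qualitative outcome: an initially uniform suspension ends up concentrated at the rim, which is exactly the non-uniformity that ruins printed electronic components.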
Similarly, when a drop of watercolor paint dries, the pigment particles flow outward, toward the rim of the drop. So artists who work with watercolors also have to deal with the coffee ring effect if they don’t want pigment accumulating at the edges. As we reported in 2018, adding alcohol to the watercolor paint can prevent it. Alternatively, an artist may wet the paper before applying the paint. Instead of the drop remaining pinned to the paper, the ink runs off. This allows the artist to play with various effects, such as generating unusual color gradients.
Trace quantities of isotopes hint at the true origin of a kind of glass that was highly prized in the Roman Empire.
Over the last eight years, conservationists have been meticulously restoring the famed Ghent altarpiece housed in Belgium’s St Bavo’s Cathedral. With the help of several advanced imaging techniques, they’ve been able to identify where overpainting from earlier restorations obscured the original work. Researchers at the University of Antwerp and the National Gallery of Art in Washington, DC, have published a new paper in the journal Science Advances demonstrating how combining different techniques greatly improved their analysis, revealing previously unknown revisions to the Lamb of God figure in the inner central panel.
The Ghent Altarpiece—aka the Adoration of the Mystic Lamb—is a 15th-century polyptych attributed to brothers Hubert and Jan van Eyck. Originally consisting of 12 panels, the altarpiece features two “wings” of four panels each, painted on both sides. Those wings were opened on church feast days so congregants could view the interior four central panels. The inner upper register features Christ the King, the Virgin Mary, and John the Baptist, flanked by the outer panels depicting angels and the figures of Adam and Eve. The exterior panels include depictions of John the Baptist and St. John the Evangelist. The Adoration of the Lamb comprises the center panel of the lower register, featuring the Lamb of God standing on an altar in a meadow surrounded by angels, with groups of martyrs, saints, and prophets congregating around the altar.
The first significant restoration was done in 1550 to repair damage from an earlier cleaning. The altarpiece was cleaned again in 1662 by the Flemish painter Antoon van den Heuvel. After it was damaged while being stored in Austrian mines during World War II, another restoration was done in the 1950s, making use of X-ray radiography (XRR) to aid in those efforts. Specifically, the researchers imaged cross sections of tiny paint samples from the altarpiece, yielding useful information about areas that had been over-painted during the earlier restorations, obscuring the original Eyckian work—including the Lamb’s head.
The drop in battery prices is enabling battery integration with renewable systems in two contexts. In one, the battery serves as a short-term power reservoir to smooth over short-term fluctuations in the output of renewable power. In the other, the battery holds the power for when renewable power production stops, as solar power does at night. This works great for off-grid use, but it adds some complications in the form of additional hardware to convert voltages and current.
But there’s actually an additional option, one that merges photovoltaic and battery hardware in a single, unified device that can have extensive storage capacity. The main drawback? The devices have either been unstable or have terrible efficiency. But an international team of researchers has put together a device that’s both stable and has efficiencies competitive with those of silicon panels.
How do you integrate photovoltaic cells and batteries? At its simplest, you make one of the electrodes that pull power out of the photovoltaic system double as an electrode of the battery. That sounds like a major “well, duh!” But integration is nowhere near that simple. Battery electrodes, after all, have to be compatible with the chemistry of the battery—in lithium-ion batteries, for example, the electrodes end up storing the ions themselves and so have to have a structure that allows that.