Diesels pollute more than lab tests detect

Because of inadequate testing, poor maintenance and other factors, cars, trucks and buses worldwide emit 4.6 million tons more harmful nitrogen oxides (NOx) than standards allow, according to a new study co-authored by University of Colorado Boulder researchers.
 
The study, published in Nature,
shows these excess emissions alone lead to 38,000 premature deaths annually
worldwide, including 1,100 deaths in the United States.
 
The findings reveal major
inconsistencies between what vehicles emit during testing and what they emit in
the real world – a problem that’s far more severe, said the researchers, than
the incident in 2015, when federal regulators discovered Volkswagen had been
fitting millions of new diesel cars with “defeat devices.”
 
[Image: Red Diesel Tank, by Meena Kadri (CC BY 2.0, http://ift.tt/o655VX), via Wikimedia Commons]
The devices sense when a vehicle
is undergoing testing and reduce emissions to comply with government standards.
Excess emissions from defeat devices have been linked to about 50 to 100 U.S.
deaths per year, studies show.
 
“A lot of attention has been
paid to defeat devices, but our work emphasizes the existence of a much larger
problem,” said Daven Henze, an associate professor of mechanical
engineering at CU Boulder who, along with postdoctoral researcher Forrest
Lacey, contributed to the study. “It shows that in addition to tightening
emissions standards, we need to be attaining the standards that already exist
in real-world driving conditions.”
 
The research was conducted in
partnership with the International Council on Clean Transportation, a
Washington, D.C.-based nonprofit organization, and Environmental Health
Analytics LLC.
 
For the paper, the researchers
assessed 30 studies of vehicle emissions under real-world driving conditions in
11 major vehicle markets representing 80 percent of new diesel vehicle sales in
2015. Those markets include Australia, Brazil, Canada, China, the European
Union, India, Japan, Mexico, Russia, South Korea and the United States.
 
They found that in 2015, diesel vehicles emitted 13.1 million tons of NOx, a chemical precursor to particulate matter and ozone; human exposure to these pollutants can lead to heart disease, stroke, lung cancer and other health problems. Had the emissions met standards, the vehicles would have emitted closer to 8.6 million tons of NOx.
 
Heavy-duty vehicles, such as
commercial trucks and buses, were by far the largest contributor worldwide,
accounting for 76 percent of the total excess NOx emissions.
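As a rough sanity check on these figures, here is a minimal Python sketch using only the numbers quoted above (the small gap between 13.1 − 8.6 = 4.5 and the headline 4.6 million tons presumably reflects rounding in the underlying data):

```python
# Back-of-the-envelope check of the study's headline numbers, as reported
# in this article. Illustrative only.

actual_nox = 13.1      # million tons of NOx emitted by diesel vehicles in 2015
permitted_nox = 8.6    # million tons, had all vehicles met certification limits

excess = actual_nox - permitted_nox
print(f"Excess NOx: {excess:.1f} million tons")  # ~4.5; reported as 4.6 after rounding

# Heavy-duty trucks and buses account for 76 percent of the excess.
print(f"Heavy-duty share: {0.76 * excess:.1f} million tons")
```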
 
Henze used computer modeling and NASA satellite data to simulate how particulate matter and ozone levels, now and in the future, are affected by excess NOx emissions in specific locations. The team then computed the impacts on health, crops and climate.
 
“The consequences of excess
diesel NOx emissions for public health are striking,” said Susan Anenberg,
co-lead author of the study and co-founder of Environmental Health Analytics
LLC.
 
China suffers the greatest health impact, with 31,400 deaths annually attributed to diesel NOx pollution, 10,700 of them linked to excess NOx emissions beyond certification limits. In Europe, where diesel passenger cars are common, 28,500 deaths annually are attributed to diesel NOx pollution, with 11,500 of those deaths linked to excess emissions.
 
The study projects that by 2040,
183,600 people will die prematurely each year due to diesel vehicle NOx
emissions unless governments act.
 
The authors say emission certification tests, both before vehicles are sold and once they are on the road, could be more accurate if they simulated a broader variety of speeds, driving styles and ambient temperatures. Some European countries now use portable devices that track a car's emissions while it is in motion.
 
“Tighter vehicle emission
standards coupled with measures to improve real-world compliance could prevent
hundreds of thousands of early deaths from air pollution-related diseases each
year,” said Anenberg.
 

On this day in science history: the Hindenburg Zeppelin arrived at Lakehurst, New Jersey, USA

In 1936, the Hindenburg Zeppelin arrived at Lakehurst, New Jersey, USA, from Germany, marking the beginning of a regular transatlantic passenger service. The flight, carrying 51 passengers and 56 crew, took 61 hours.
[Image: Hindenburg at Lakehurst, by U.S. Department of the Navy, Bureau of Aeronautics, Naval Aircraft Factory, Philadelphia, Pennsylvania (USA) (public domain), via Wikimedia Commons]
The Hindenburg was a large
German commercial passenger-carrying rigid airship, the lead ship of the
Hindenburg class, the longest class of flying machine and the largest airship
by envelope volume. It was designed and built by the Zeppelin Company
(Luftschiffbau Zeppelin GmbH) on the shores of Lake Constance in
Friedrichshafen and was operated by the German Zeppelin Airline Company
(Deutsche Zeppelin-Reederei). The Hindenburg had a duralumin structure,
incorporating 15 Ferris wheel-like bulkheads along its length, with 16 cotton
gas bags fitted between them. The bulkheads were braced to each other by
longitudinal girders placed around their circumferences. The airship’s outer
skin was of cotton doped with a mixture of reflective materials intended to
protect the gas bags within from radiation, both ultraviolet (which would
damage them) and infrared (which might cause them to overheat). The gas cells
were made by a new method pioneered by Goodyear using multiple layers of
gelatinized latex rather than the previous goldbeater’s skins. In 1931 the
Zeppelin Company purchased 5,000 kg (11,000 lb) of duralumin salvaged from the
wreckage of the October 1930 crash of the British airship R101, which might
have been re-cast and used in the construction of the Hindenburg.
 
The interior furnishings of
the Hindenburg were designed by Fritz August Breuhaus, whose design experience
included Pullman coaches, ocean liners, and warships of the German Navy. The
upper “A” Deck contained small passenger quarters in the middle
flanked by large public rooms: a dining room to port and a lounge and writing
room to starboard. Paintings on the dining room walls portrayed the Graf
Zeppelin’s trips to South America. A stylized world map covered the wall of the
lounge. Long slanted windows ran the length of both decks. The passengers were
expected to spend most of their time in the public areas, rather than their
cramped cabins.
 
The lower “B” Deck
contained washrooms, a mess hall for the crew, and a smoking lounge. Harold G.
Dick, an American representative from the Goodyear Zeppelin Company, recalled
“The only entrance to the smoking room, which was pressurized to prevent
the admission of any leaking hydrogen, was via the bar, which had a swiveling
air lock door, and all departing passengers were scrutinized by the bar steward
to make sure they were not carrying out a lit cigarette or pipe.”
 
Helium was initially selected as the Hindenburg's lifting gas because, being non-flammable, it was the safest choice for airships. One proposed measure to save helium was to make
double-gas cells for 14 of the 16 gas cells; an inner hydrogen cell would be
protected by an outer cell filled with helium, with vertical ducting to the
dorsal area of the envelope to permit separate filling and venting of the inner
hydrogen cells. At the time, however, helium was also relatively rare and
extremely expensive as the gas was only available in industrial quantities from
distillation plants at certain oil fields in the United States. Hydrogen, by
comparison, could be cheaply produced by any industrialized nation and being
lighter than helium also provided more lift. Because of its expense and rarity,
American rigid airships using helium were forced to conserve the gas at all
costs and this hampered their operation.
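To put the hydrogen-versus-helium lift comparison in numbers, here is a minimal sketch; the gas densities (at 0 °C and 1 atm) and the commonly cited ~200,000 m³ gas capacity of the Hindenburg are assumed textbook values, not figures from this article:

```python
# Rough buoyancy comparison for hydrogen vs. helium lifting gas.
# Densities at 0 deg C and 1 atm; the ~200,000 m^3 envelope volume is
# the commonly cited figure for the Hindenburg (assumption).

RHO_AIR, RHO_H2, RHO_HE = 1.293, 0.0899, 0.1786  # kg/m^3
VOLUME = 200_000                                  # m^3 of lifting gas

lift_h2 = VOLUME * (RHO_AIR - RHO_H2)  # gross lift = displaced air mass - gas mass
lift_he = VOLUME * (RHO_AIR - RHO_HE)

print(f"Hydrogen lift: {lift_h2 / 1000:.0f} tonnes")                # ~241 t
print(f"Helium lift:   {lift_he / 1000:.0f} tonnes")                # ~223 t
print(f"Hydrogen advantage: {100 * (lift_h2 / lift_he - 1):.0f}%")  # ~8%
```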
 
Despite a U.S. ban on the
export of helium under the Helium Control Act of 1927, the Germans designed the
airship to use the far safer gas in the belief that they could convince the US
government to license its export. When the designers learned that the National
Munitions Control Board would refuse to lift the export ban, they were forced
to re-engineer the Hindenburg to use hydrogen for lift. Despite the danger of
using flammable hydrogen, no alternative lighter-than-air gases could provide
sufficient lift. One beneficial side effect of employing hydrogen was that more
passenger cabins could be added. The Germans’ long history of flying
hydrogen-filled passenger airships without a single injury or fatality
engendered a widely held belief that they had mastered the safe use of hydrogen. The Hindenburg's first-season performance appeared to demonstrate this; however, the airship was destroyed by fire 14 months later, on May 6, 1937, at the end of the first North American transatlantic journey of its second season of service.
Thirty-six people died in the accident, which occurred while landing at
Lakehurst. This was the last of the great airship disasters; it was preceded by the
crashes of the British R38 in 1921 (44 dead), the US airship Roma in 1922 (34 dead),
the French Dixmude in 1923 (52 dead), the British R101 in 1930 (48 dead), and
the US Akron in 1933 (73 dead).

 

 

Mice with missing lipid-modifying enzyme heal better after heart attack

Two immune responses are important for recovery after a heart attack – an acute inflammatory response that attracts leukocyte immune cells to remove dead tissue, followed by a resolving response that allows healing.
 
[Image: The human heart, by Patrick J. Lynch, medical illustrator (CC BY 2.5, http://ift.tt/OEA2JO), via Wikimedia Commons]
Failure of the resolving response can allow a persistent, low-grade nonresolving inflammation, which can lead to progressive acute or chronic heart failure. Despite medical advances, 2 to 17 percent of patients die within one year after a heart attack due to failure to resolve inflammation. More than 50 percent die within five years.
 
Using a mouse heart attack model, Ganesh Halade, Ph.D., and his University of Alabama at Birmingham colleagues have shown that knocking out one particular lipid-modifying enzyme, along with a short-term dietary excess of a certain lipid, can improve post-heart attack healing and clear inflammation. Halade, an assistant professor in the UAB Department of Medicine, hopes that future physicians will be able to use knowledge from studies like his to boost healing in patients after heart attacks and prevent heart failure.
 
“Our goal is healing, and we are reaching that goal,” he said of efforts in the UAB Division of Cardiovascular Medicine.
 
Why are lipids and lipid-modifying enzymes important in both triggering and resolving inflammation? Three key lipid-modifying enzymes in the body change the lipids into various signaling agents. Some of these signaling agents regulate the triggering of inflammation, and others promote the reparative pathway.
 
The lipids modified by the enzymes are two types of essential fatty acids that come from food, since mammals cannot synthesize them. One is n-6 or omega-6 fatty acids, and the other type is n-3 or omega-3 fatty acids. The balance of these two types is important.
 
The Mediterranean diet, with a near balance of omega-3 and omega-6 fatty acids, promotes heart health. The Western diet, with large amounts of omega-6 fatty acids that greatly exceed the levels of omega-3 fatty acids, can lead to heart disease.
 
The three main lipid-modifying enzymes compete with each other to modify whatever fatty acids are available from the diet. So, Halade and colleagues asked, what will happen if we knock out one of the key enzymes, the 12/15 lipoxygenase?
 
They reasoned that this would increase the metabolites produced by the other two main enzymes, cyclooxygenase and cytochrome P450, because they would no longer have to compete with 12/15 lipoxygenase for lipids to modify. This might be a benefit because the signaling lipids produced through the cyclooxygenase and cytochrome P450 pathways were already known to act as major resolution-promoting factors in post-heart attack healing.
 
The UAB researchers found that knocking out the 12/15 lipoxygenase and feeding the mice a short-term excess of polyunsaturated fatty acids led to increased leukocyte clearance after experimental heart attack, meaning less chronic inflammation. It also improved heart function, increased the levels of bioactive lipids during the reparative phase of healing, and led to higher levels of reparative cytokine markers. Additionally, the heart muscle showed less of the fibrosis that is a factor in heart failure.
 
Besides congestive heart failure, persistent inflammation aggravates a vicious cycle in many cardiovascular diseases, including atherogenesis, atheroprogression, atherosclerosis and peripheral artery disease.
 
Halade says further mechanistic studies are warranted to develop novel targets for treatment and to find therapies that support the onset of left ventricle healing and prevent heart failure pathology.
 

On this day in science history: Pioneer 10 crossed the orbit of Pluto

In 1983, Pioneer 10, an American space probe, crossed the orbit of Pluto, then regarded as the outermost planet, to continue its voyage beyond our solar system. This space exploration project was conducted by the NASA Ames Research Center in California, and the space probe was manufactured by TRW Inc.
 
Pioneer 10 was launched on
March 2, 1972, by an Atlas-Centaur expendable launch vehicle from Cape Canaveral,
Florida. Between July 15, 1972, and February 15, 1973, it became the first
spacecraft to traverse the asteroid belt. Photography of Jupiter began on November
6, 1973, at a range of 25,000,000 kilometres (16,000,000 mi), and a total of
about 500 images were transmitted. The closest approach to the planet was on
December 4, 1973, at a range of 132,252 kilometres (82,178 mi). During the
mission, the on-board instruments were used to study the asteroid belt, the
environment around Jupiter, the solar wind, cosmic rays, and eventually the far
reaches of the Solar System and heliosphere.
 
[Image: Artist's impression of Pioneer 10's flyby of Jupiter, by Rick Guidice (public domain), via Wikimedia Commons]
So, what do we know about
Jupiter?
 
Jupiter is the fifth planet
from the Sun and the largest in the Solar System. It is a giant planet with a
mass one-thousandth that of the Sun, but two and a half times that of all the
other planets in the Solar System combined. Jupiter and Saturn are gas giants;
the other two giant planets, Uranus and Neptune, are ice giants. Jupiter has
been known to astronomers since antiquity. The Romans named it after their
god Jupiter. When viewed from Earth, Jupiter can reach an apparent
magnitude of −2.94, bright enough for its reflected light to cast shadows, and making it on average the third-brightest object in the night sky after the
Moon and Venus.
 
Jupiter is primarily composed
of hydrogen with a quarter of its mass being helium, though helium comprises
only about a tenth of the number of molecules. It may also have a rocky core of
heavier elements, but like the other giant planets, Jupiter lacks a
well-defined solid surface. Because of its rapid rotation, the planet’s shape
is that of an oblate spheroid (it has a slight but noticeable bulge around the
equator). The outer atmosphere is visibly segregated into several bands at
different latitudes, resulting in turbulence and storms along their interacting
boundaries. A prominent result is the Great Red Spot, a giant storm that is
known to have existed since at least the 17th century when it was first seen by
telescope. Surrounding Jupiter is a faint planetary ring system and a powerful
magnetosphere. Jupiter has at least 67 moons, including the four large Galilean
moons discovered by Galileo Galilei in 1610. Ganymede, the largest of these,
has a diameter greater than that of the planet Mercury.
 
Radio communications were lost
with Pioneer 10 on January 23, 2003, because of the loss of electric power for
its radio transmitter, with the probe at a distance of 12 billion kilometers
(80 AU) from Earth.
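A one-line conversion confirms that the two distance figures agree (1 AU = 149,597,870.7 km):

```python
# Check that the reported 12 billion km matches the quoted 80 AU.
KM_PER_AU = 149_597_870.7
print(f"{12e9 / KM_PER_AU:.0f} AU")  # -> 80 AU
```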
 
Jupiter has been explored on
several other occasions by robotic spacecraft, such as the Voyager flyby
missions and later, the Galileo orbiter. In late February 2007, Jupiter was
visited by the New Horizons probe, which used Jupiter’s gravity to increase its
speed and bend its trajectory en route to Pluto. The latest probe to visit the
planet is Juno, which entered orbit around Jupiter on July 4, 2016. Future
targets for exploration in the Jupiter system include the probable ice-covered
liquid ocean of its moon Europa.
 

Mission control: salty diet makes you hungry, not thirsty

We’ve all heard it: eating salty
foods makes you thirstier. But what sounds like good nutritional advice turns
out to be an old wives’ tale. In a study carried out during a simulated mission
to Mars, an international group of scientists has found exactly the opposite to
be true. “Cosmonauts” who ate more salt retained more water, weren’t
as thirsty, and needed more energy.
 
[Image: Salt shaker, by Dubravko Sorić (SoraZG on Flickr) (CC BY 2.0, http://ift.tt/o655VX), via Wikimedia Commons]
For some reason, no one had ever
carried out a long-term study to determine the relationship between the amount
of salt in a person’s diet and their drinking habits. Scientists have known that
increasing a person’s salt intake stimulates the production of more urine – it
has simply been assumed that the extra fluid comes from drinking. Not so fast!
say researchers from the German Aerospace Center (DLR), the Max Delbrück Center
for Molecular Medicine (MDC), Vanderbilt University and colleagues around the
world. Recently they took advantage of a simulated mission to Mars to put the
old adage to the test. Their conclusions appear in two papers in the current
issue of The Journal of Clinical Investigation.
 
What does salt have to do with
Mars? Nothing, really, except that on a long space voyage conserving every drop
of water might be crucial. A connection between salt intake and drinking could
affect your calculations – you wouldn’t want an interplanetary traveler to die
because he liked an occasional pinch of salt on his food. The real interest in
the simulation, however, was that it provided an environment in which every
aspect of a person’s nutrition, water consumption, and salt intake could be
controlled and measured.
 
The studies were carried out by
Natalia Rakova (MD, PhD) of the Charité and MDC and her colleagues. The
subjects were two groups of 10 male volunteers sealed into a mock spaceship for
two simulated flights to Mars. The first group was examined for 105 days; the
second over 205 days. They had identical diets except that over periods lasting
several weeks, they were given three different levels of salt in their food.
 
The results confirmed that eating
more salt led to a higher salt content in urine – no surprise there. Nor was
there any surprise in a correlation between amounts of salt and overall
quantity of urine. But the increase wasn’t due to more drinking – in fact, a
salty diet caused the subjects to drink less. Salt was triggering a mechanism
to conserve water in the kidneys.
 
Before the study, the prevailing
hypothesis had been that the charged sodium and chloride ions in salt grabbed
onto water molecules and dragged them into the urine. The new results showed
something different: salt stayed in the urine, while water moved back into the
kidney and body. This was completely puzzling to Prof. Jens Titze, MD of the
University of Erlangen and Vanderbilt University Medical Center and his
colleagues. “What alternative driving force could make water move
back?” Titze asked.
 
Experiments in mice hinted that
urea might be involved. This substance is formed in muscles and the liver as a
way of shedding nitrogen. In mice, urea was accumulating in the kidney, where
it counteracts the water-drawing force of sodium and chloride. But synthesizing
urea takes a lot of energy, which explains why mice on a high-salt diet were
eating more. Higher salt didn’t increase their thirst, but it did make them
hungrier. Likewise, the human “cosmonauts” receiving a salty diet complained about being hungry.

The project revises scientists’
view of the function of urea in our bodies. “It’s not solely a waste
product, as has been assumed,” Prof. Friedrich C. Luft, MD of the Charité
and MDC says. “Instead, it turns out to be a very important osmolyte – a
compound that binds to water and helps transport it. Its function is to keep
water in when our bodies get rid of salt. Nature has apparently found a way to
conserve water that would otherwise be carried away into the urine by
salt.”
 
The new findings change the way
scientists have thought about the process by which the body achieves water
homeostasis – maintaining a proper amount and balance. That must happen whether
a body is being sent to Mars or not. “We now have to see this process as a
concerted activity of the liver, muscle and kidney,” says Jens Titze.
 
“While we didn’t directly
address blood pressure and other aspects of the cardiovascular system, it’s
also clear that their functions are tightly connected to water homeostasis and
energy metabolism.”
 

The chemistry behind the new one pound coin

We all know that money makes the world go around, but do you know what goes into it? The new pound coin arrived on 28th March, largely as a preventative measure against counterfeiting.  Take a look at the graphic below for more information about its composition.
 
[Image: infographic on the composition of the new £1 coin. Source: Compound Interest]
 
Why the new coin is harder to counterfeit
  1. 12-sided – its distinctive shape means it stands out by sight and by touch
  2. Bimetallic – the outer ring is gold coloured (nickel-brass) and the inner ring is silver coloured (nickel-plated alloy)
  3. Latent image – it has an image like a hologram that changes from a ‘£’ symbol to the number ‘1’ when the coin is seen from different angles
  4. Micro-lettering – around the rim on the heads side of the coin, tiny lettering reads ONE POUND; on the tails side you can find the year the coin was produced
  5. Milled edges – it has grooves on alternate sides
  6. Hidden high-security feature – an additional security feature is built into the coin to protect it from counterfeiting, but details have not been revealed

 


New device produces hydrogen peroxide for water purification

Limited access to clean water is a major issue for billions of people in the developing world, where water sources are often contaminated with urban, industrial and agricultural waste. Many disease-causing organisms and organic pollutants can be quickly removed from water using hydrogen peroxide without leaving any harmful residual chemicals. However, producing and distributing hydrogen peroxide is a challenge in many parts of the world.
 
[Image: Purified drinking water]
Now scientists at the Department of Energy’s SLAC National Accelerator Laboratory and Stanford University have created a small device for hydrogen peroxide production that could be powered by renewable energy sources, like conventional solar panels.
 
“The idea is to develop an electrochemical cell that generates hydrogen peroxide from oxygen and water on site, and then use that hydrogen peroxide in groundwater to oxidize organic contaminants that are harmful for humans to ingest,” said Chris Hahn, a SLAC associate staff scientist.
 
Their results were reported March 1 in Reaction Chemistry & Engineering. The project was a collaboration between three research groups at the SUNCAT Center for Interface Science and Catalysis, which is jointly run by SLAC and Stanford University.
 
“Most of the projects here at SUNCAT follow a similar path,” said Zhihua (Bill) Chen, a graduate student in the group of Tom Jaramillo, an associate professor at SLAC and Stanford. “They start from predictions based on theory, move to catalyst development and eventually produce a prototype device with a practical application.”
 
In this case, researchers in the theory group led by SLAC/Stanford Professor Jens Nørskov used computational modeling, at the atomic scale, to investigate carbon-based catalysts capable of lowering the cost and increasing the efficiency of hydrogen peroxide production. Their study revealed that most defects in these materials are naturally selective for generating hydrogen peroxide, and some are also highly active. Since defects can be naturally formed in the carbon-based materials during the growth process, the key finding was to make a material with as many defects as possible.
 
“My previous catalyst for this reaction used platinum, which is too expensive for decentralized water purification,” said research engineer Samira Siahrostami. “The beautiful thing about our cheaper carbon-based material is that it has a huge number of defects that are active sites for catalyzing hydrogen peroxide production.”
 
Stanford graduate student Shucheng Chen, who works with Stanford Professor Zhenan Bao, then prepared the carbon catalysts and measured their properties. With the help of SSRL staff scientists Dennis Nordlund and Dimosthenis Sokaras, these catalysts were also characterized using X-rays at SLAC’s Stanford Synchrotron Radiation Lightsource (SSRL), a DOE Office of Science User Facility.
 
“We depended on our experiments at SSRL to better understand our material’s structure and check that it had the right kinds of defects,” Shucheng Chen said.
 
Finally, he passed the catalyst along to his roommate Bill Chen, who designed, built and tested their device.
 
“Our device has three compartments,” Bill Chen explained. “In the first chamber, oxygen gas flows through the chamber, interfaces with the catalyst made by Shucheng and is reduced into hydrogen peroxide. The hydrogen peroxide then enters the middle chamber, where it is stored in a solution.” In a third chamber, another catalyst converts water into oxygen gas, and the cycle starts over.
 
Separating the two catalysts with a middle chamber makes the device cheaper, simpler and more robust than separating them with a standard semi-permeable membrane, which can be attacked and degraded by the hydrogen peroxide.
 
The device can also run on renewable energy sources available in villages. The electrochemical cell is essentially an electrical circuit that operates with a small voltage applied across it. The reaction in chamber one puts electrons into oxygen to make hydrogen peroxide, which is balanced by a counter reaction in chamber three that takes electrons from water to make oxygen – matching the current and completing the circuit. Since the device requires only about 1.7 volts applied between the catalysts, it can run on a battery or two standard solar panels.
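For a sense of scale, here is a minimal sketch of the energy cost per liter of treated water, assuming the standard two-electron reduction of oxygen (O2 + 2H2O + 2e− → H2O2 + 2OH−) at the quoted 1.7 volts; the 5 mg/L dose is an assumed stand-in for the article's "few milligrams", not a figure from the paper:

```python
# Hedged estimate of the electrical energy needed per liter of treated water.
# Assumes the two-electron O2 -> H2O2 reduction and the 1.7 V quoted above;
# the 5 mg dose is an illustrative value for "a few milligrams".

FARADAY = 96_485       # C per mole of electrons
VOLTAGE = 1.7          # V applied across the cell (from the article)
M_H2O2 = 34.01         # g/mol, molar mass of hydrogen peroxide
DOSE = 0.005           # g of H2O2 per liter of water (assumption)

charge = (DOSE / M_H2O2) * 2 * FARADAY  # 2 electrons per H2O2 molecule
energy = charge * VOLTAGE               # joules per liter treated
print(f"~{energy:.0f} J per liter")     # on the order of tens of joules
```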
 
The research groups are now working on a higher-capacity device.
 
Currently the middle chamber holds only about 10 microliters of hydrogen peroxide; they want to make it bigger. They’re also trying to continuously circulate the liquid in the middle chamber to rapidly pump hydrogen peroxide out, so the size of the storage chamber no longer limits production.
 
They would also like to make hydrogen peroxide in higher concentrations. However, only a few milligrams are needed to treat one liter of water, and the current prototype already produces a sufficient concentration – about one-tenth that of the hydrogen peroxide sold in stores for basic medical needs.
 
In the long term, the team wants to change the alkaline environment inside the cell to a neutral one that’s more like water. This would make it easier for people to use, because the hydrogen peroxide could be mixed with drinking water directly without having to neutralize it first.
 
The team members are excited about their results and feel they are on the right track to developing a practical device.
 
“Currently it’s just a prototype, but I personally think it will shine in the area of decentralized water purification for the developing world,” said Bill Chen. “It’s like a magic box. I hope it can become a reality.”
 

On this day in science history: polyethylene was discovered

Polyethylene was first synthesized by the German chemist Hans von Pechmann, who prepared it by accident in 1898 while investigating diazomethane. When his colleagues Eugen Bamberger and Friedrich Tschirner characterized the white, waxy substance that he had created, they recognized that it contained long –CH2– chains and termed it polymethylene.
 
[Image: Polyethylene balls, by Lluis tgn (own work) (CC BY-SA 3.0, http://ift.tt/HKkdTz, or GFDL, http://ift.tt/KbUOlc), via Wikimedia Commons]
The first industrially practical polyethylene synthesis (diazomethane is a notoriously unstable substance that is generally avoided in industrial applications) was discovered in 1933 by Eric Fawcett and Reginald Gibson, again by accident, at the Imperial Chemical Industries (ICI) works in Northwich, England. Upon applying extremely high pressure (several hundred atmospheres) to a mixture of ethylene and benzaldehyde, they again produced a white, waxy material. Because the reaction had been initiated by trace oxygen contamination in their apparatus, the experiment was, at first, difficult to reproduce. It was not until 1935 that another ICI chemist, Michael Perrin, developed this accident into a reproducible high-pressure synthesis for polyethylene that became the basis for industrial LDPE production beginning in 1939. Because polyethylene was found to have very low-loss properties at very high radio frequencies, commercial distribution in Britain was suspended on the outbreak of World War II, secrecy was imposed, and the new process was used to produce insulation for UHF and SHF coaxial cables of radar sets. During World War II, further research was done on the ICI process, and in 1944 Bakelite Corporation at Sabine, Texas, and Du Pont at Charleston, West Virginia, began large-scale commercial production under license from ICI.
 
The breakthrough landmark in the commercial production of polyethylene came with the development of catalysts that promote polymerization at mild temperatures and pressures. The first of these was a chromium trioxide-based catalyst discovered in 1951 by Robert Banks and J. Paul Hogan at Phillips Petroleum. In 1953 the German chemist Karl Ziegler developed a catalytic system based on titanium halides and organoaluminium compounds that worked at even milder conditions than the Phillips catalyst. The Phillips catalyst is less expensive and easier to work with, however, and both methods are heavily used industrially. By the end of the 1950s both the Phillips- and Ziegler-type catalysts were being used for HDPE production. In the 1970s, the Ziegler system was improved by the incorporation of magnesium chloride. Catalytic systems based on soluble catalysts, the metallocenes, were reported in 1976 by Walter Kaminsky and Hansjörg Sinn. The Ziegler- and metallocene-based catalyst families have proven to be very flexible at copolymerizing ethylene with other olefins and have become the basis for the wide range of polyethylene resins available today, including very low-density polyethylene and linear low-density polyethylene. Such resins, in the form of UHMWPE fibers, have (as of 2005) begun to replace aramids in many high-strength applications.
 
One of the main problems with polyethylene is that without special treatment it is not readily biodegradable, and thus accumulates. In Japan, disposing of plastics in an environmentally friendly way was the major problem discussed until the Fukushima disaster in 2011, and was cited as a $90 billion market for solutions. Since 2008, Japan has rapidly increased the recycling of plastics, but still has a large amount of plastic wrapping that goes to waste.
 
In May 2008, Daniel Burd, a 16-year-old Canadian, won the Canada-Wide Science Fair in Ottawa after discovering that Pseudomonas fluorescens, with the help of Sphingomonas, can degrade over 40% of the weight of plastic bags in less than three months.
 
The thermophilic bacterium Brevibacillus borstelensis (strain 707) was isolated from a soil sample and found to use low-density polyethylene as a sole carbon source when incubated together at 50°C. Biodegradation increased with time exposed to ultraviolet radiation.
 
In 2010, a Japanese researcher, Akinori Ito, released the prototype of a machine which creates oil from polyethylene using a small, self-contained vapor distillation process.
 
In 2014, a Chinese researcher discovered that Indian mealmoth larvae could metabolize polyethylene, after observing that plastic bags at his home had developed small holes. Deducing that the hungry larvae must have digested the plastic somehow, he and his team analyzed their gut bacteria and found a few strains that could use plastic as their only carbon source. Not only could the bacteria from the guts of the Plodia interpunctella moth larvae metabolize polyethylene, they degraded it significantly, dropping its tensile strength by 50%, its mass by 10% and the molecular weights of its polymeric chains by 13%.
 

Why water splashes: New theory reveals secrets

New research from the University of Warwick generates fresh insight into how a raindrop or spilt coffee splashes.
 
Dr James Sprittles from the Mathematics Institute has created a new theory to explain exactly what happens – in the tiny space between a drop of water and a surface – to cause a splash.
 
[Image: Water splash]
 
When a drop of water falls, it is prevented from spreading smoothly across a surface by a microscopically thin layer of air that it can’t push aside – so instead of wetting the surface, parts of the liquid fly off, and a splash is generated.
 
A layer of air 1 micron thick – fifty times smaller than the width of a human hair – can obstruct a 1 mm drop of water that is one thousand times larger.
 
This is comparable to a 1cm layer of air stopping a tsunami wave spreading across a beach.
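The scale separation, and why it forces a microscopic treatment, can be checked with a few lines (the hair width and the ~68 nm mean free path of air at atmospheric pressure are assumed textbook values, not figures from the article):

```python
# Scale check of the figures in the text, plus the Knudsen number that
# motivates a kinetic (microscopic) theory of the trapped air film.

air_film = 1e-6          # m, ~1 micron air layer under the drop
drop = 1e-3              # m, ~1 mm water drop
hair = 50e-6             # m, typical human hair width (assumption)
mean_free_path = 68e-9   # m, for air at 1 atm (assumption)

print(f"drop / film: {drop / air_film:.0f}x")              # ~1000x, as stated
print(f"hair / film: {hair / air_film:.0f}x")              # ~50x, as stated
print(f"Knudsen number: {mean_free_path / air_film:.2f}")  # ~0.07: kinetic effects matter
```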
 
Dr Sprittles has established exactly what happens to this minuscule layer of air during the super-fast action by developing a new theory that captures its microscopic dynamics – factoring in different physical conditions, such as liquid viscosity and air pressure, to predict whether splashes will occur or not.
 
The lower the air pressure, the easier the air can escape from the squashed layer – giving less resistance to the water drop – enabling the suppression of splashes. This is why drops are less likely to splash at the top of mountains, where the air pressure is reduced.
 
Understanding the conditions that cause splashing enables researchers to find out how to prevent it – leading to potential breakthroughs in various fields.
 
In 3D printing, liquid drops can form the building blocks of tailor-made products such as hearing aids; stopping splashing is key to making products of the desired quality.
 
Splashes are also a crucial part of forensic science – whether blood drops have splashed or not provides insight into where they came from, which can be vital information in a criminal investigation.
 
Dr Sprittles comments:
 
“You would never expect a seemingly simple everyday event to exhibit such complexity. The air layer’s width is so small that it is similar to the distance air molecules travel between collisions, so that traditional models are inaccurate and a microscopic theory is required.
 
“Most promisingly, the new theory should have applications to a wide range of related phenomena, such as in climate science – to understand how water drops collide during the formation of clouds or to estimate the quantity of gas being dragged into our oceans by rainfall.”
 
The research, ‘Kinetic Effects in Dynamic Wetting’, is published in Physical Review Letters.
 

Looking for signs of the first stars

It may soon be possible to detect the universe’s first stars by looking for the blue colour they emit on explosion.
 
The universe was dark and filled with hydrogen and helium for 100 million years following the Big Bang. Then the first stars appeared, and metals – astronomers' term for elements heavier than hydrogen and helium – were created by thermonuclear fusion reactions within stars.
 
[Image: Stars in the sky, ESA/Hubble (CC BY 4.0, http://ift.tt/1eRPUFd), via Wikimedia Commons]
These metals were spread around the galaxies by exploding stars, or ‘supernovae’. Studying first-generation supernovae, which are more than 13 billion years old, provides a glimpse into what the universe might have looked like when the first stars, galaxies and supermassive black holes formed. But to date, it has been difficult to distinguish a first-generation supernova from a later one.
 
New research, led by Alexey Tolstov from the Kavli Institute for the Physics and Mathematics of the Universe, has identified characteristic differences between these supernovae types after experimenting with supernovae models based on observations of extremely metal-poor stars.
 
Similar to all supernovae, the luminosity of metal-poor supernovae shows a characteristic rise to a peak brightness followed by a decline. The phenomenon starts when a star explodes with a bright flash, caused by a shock wave emerging from its surface after its core collapses. This is followed by a long ‘plateau’ phase of almost constant luminosity lasting several months, followed by a slow exponential decay.
 
The team calculated the light curves of metal-poor blue versus metal-rich red supergiant stars. The shock wave and plateau phases are shorter, bluer and fainter in metal-poor supernovae. The team concluded that the colour blue could be used as an indicator of a first-generation supernova. In the near future, new, large telescopes, such as the James Webb Space Telescope scheduled to be launched in 2018, will be able to detect the first explosions of stars and may be able to identify them using this method.
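To make the light-curve description concrete, here is a schematic sketch of a plateau-type supernova light curve; the phase durations and relative brightnesses are illustrative placeholders, not fitted values from the study:

```python
# Schematic plateau-type supernova light curve: a brief shock-breakout
# flash, months of near-constant luminosity, then a slow exponential tail.
# All numbers are illustrative, not from the study.

import numpy as np

def light_curve(t, flash_days=2.0, plateau_days=100.0, tail_efold=80.0):
    """Luminosity in arbitrary units as a function of time in days."""
    return np.where(t < flash_days, 1.0,                               # breakout flash
           np.where(t < plateau_days, 0.4,                             # plateau phase
                    0.4 * np.exp(-(t - plateau_days) / tail_efold)))   # slow decay

t = np.array([0.0, 50.0, 200.0])  # flash, mid-plateau, declining tail
print(light_curve(t))             # [1.0, 0.4, ~0.11]
```

Per the article, a metal-poor (first-generation) event would show a shorter, bluer and fainter flash and plateau than the metal-rich version sketched here.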
 