Spock’s home system exists in reality!


It takes a little imagination to wish some favorite fictional universes into existence. But legions of "Star Trek" fans don't have to wish: one star system from the franchise really exists in our Milky Way galaxy. In Star Trek lore, Vulcan is the home of logic, learning and the deeply beloved first officer Mr. Spock. While Vulcan is fictional, the star system it belongs to, 40 Eridani, is very real. It's located only 16.5 light-years from Earth, and its primary star can be spotted with the naked eye. So how much is science fiction and how much is science fact?
“Could there be an Earth-like planet in this system? We have no way of knowing that now,” said Karl Stapelfeldt, chief scientist of NASA’s Exoplanet Exploration Program. So while Vulcan (as far as we know) doesn’t exist, a fascinating triple sunset would occur on any rocky planet in the system, because 40 Eridani has three stars that circle each other.
The most massive is 40 Eridani A, a dwarf star that serves as the mythical Vulcan's sun. The other two form a close pair, orbiting each other at a distance from 40 Eridani A. This binary pair contains a red dwarf (40 Eridani C) and a white dwarf (40 Eridani B). From the surface of Vulcan, "they would gleam brilliantly in the Vulcan sky," Gene Roddenberry wrote in a 1991 letter to Sky & Telescope magazine.

If you believe in science fiction, Mr. Spock's dreamt-up world lives in the habitable zone of the largest star, 40 Eridani A. The habitable zone is the range of distances from a star at which liquid water can exist on a planet's surface. Too far from its sun and Vulcan would freeze like Pluto; too close and it would sizzle like Mercury. Vulcan is perched on the inner edge, lending the world its imagined desert-like quality (at least, in a timeline where the planet remains undestroyed).


But if there were a planet like Vulcan in the 40 Eridani system, would we be able to see it? Not yet. “We don’t yet have a way to detect it, but NASA is working on the technology to make it possible,” Stapelfeldt said.

SpaceX Falcon 9 Launch and Landing



One of the unintended consequences of blazing new trails is that you often capture incredible photographs along the way. That's how we ended up with this gorgeous shot, which shows SpaceX's Falcon 9 rocket both taking off and landing… in a single long exposure.

This particular image was captured on July 18th, the night the Falcon 9 delivered the Dragon cargo spacecraft into orbit.

Once the payload was securely on its way to the International Space Station, the rocket’s first stage descended back to Earth where it made a successful landing that, no matter how many times you see it, should still inspire awe.

In total, about 9 minutes elapsed between takeoff and landing.

Scientists work toward storing digital information in DNA





Her computer, Karin Strauss says, contains her "digital attic"—a place where she stores that published math paper she wrote in high school, and computer science schoolwork from college.

She'd like to preserve the stuff "as long as I live, at least," says Strauss, 37. But computers must be replaced every few years, and each time she must copy the information over, "which is a little bit of a headache."

It would be much better, she says, if she could store it in DNA—the stuff our genes are made of.

Strauss, who works at Microsoft Research in Redmond, Washington, is working to make that sci-fi fantasy a reality.

She and other scientists are not focused on finding ways to stow high school projects, snapshots or other things an average person might accumulate, at least for now. Rather, they aim to help companies and institutions archive huge amounts of data for decades or centuries, at a time when the world is generating digital data faster than it can store it.

To understand her quest, it helps to know how companies, governments and other institutions store data now: For long-term storage it's typically disks or a specialized kind of tape, wound up in cartridges about three inches on a side and less than an inch thick. A single cartridge containing about half a mile of tape can hold the equivalent of about 46 million books of 200 pages apiece, and three times that much if the data lends itself to being compressed.
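
As a rough sanity check of that figure, the arithmetic works out if you assume a couple of kilobytes of plain text per page (the bytes-per-page value below is an assumption for illustration, not from the article):

```python
# Back-of-the-envelope check of the tape cartridge figure above.
# The bytes-per-page value is an assumed round number.
books = 46_000_000
pages_per_book = 200
bytes_per_page = 2_000  # ~2 KB of plain text per page (assumed)

total_bytes = books * pages_per_book * bytes_per_page
print(f"{total_bytes / 1e12:.1f} TB uncompressed")  # ~18.4 TB
```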

A tape cartridge can store data for about 30 years under ideal conditions, says Matt Starr, chief technology officer of Spectra Logic, which sells data-storage devices. But a more practical limit is 10 to 15 years, he says.

It's not that the data will disappear from the tape. A bigger problem is familiar to anybody who has come across an old eight-track tape or floppy disk and realized he no longer has a machine to play it. Technology moves on, and data can't be retrieved if the means to read it is no longer available, Starr says.

So for that and other reasons, long-term archiving requires repeatedly copying the data to new technologies.

Into this world comes the notion of DNA storage. DNA is by its essence an information-storing molecule; the genes we pass from generation to generation transmit the blueprints for creating the human body. That information is stored in strings of what's often called the four-letter DNA code. That really refers to sequences of four building blocks—abbreviated as A, C, T and G—found in the DNA molecule. Specific sequences give the body directions for creating particular proteins.


Digital devices, on the other hand, store information in a two-letter code that produces strings of ones and zeroes. A capital "A," for example, is 01000001.

Converting digital information to DNA involves translating between the two codes. In one lab, for example, a capital A can become ATATG. The idea is that, once that translation is made, strings of DNA can be custom-made to carry the new code, and hence the information that code contains.
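
As a concrete illustration, here is a minimal sketch of one simple mapping that packs two bits into each DNA base. This is a generic textbook-style scheme, not the specific lab encoding mentioned above (which turns a capital A into ATATG):

```python
# Minimal digital-to-DNA translation sketch: two bits per base.
# Illustrative only; real encodings add constraints and error correction.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Translate bytes into a DNA string, two bits per base."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    """Translate a DNA string back into bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

print(encode(b"A"))  # 01000001 -> "CAAC"
assert decode(encode(b"Hello")) == b"Hello"
```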

One selling point is durability. Scientists can recover and read DNA sequences from fossils of Neanderthals and even older life forms. So as a storage medium, "it could last thousands and thousands of years," says Luis Ceze of the University of Washington, who works with Microsoft on DNA data storage.

Advocates also stress that DNA crams information into very little space. Almost every cell of your body carries about six feet of it; that adds up to billions of miles in a single person. In terms of information storage, that compactness could mean storing all the publicly accessible data on the internet in a space the size of a shoebox, Ceze says.
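
The "billions of miles" figure follows from simple arithmetic (the human cell count below is an assumed round number, for illustration only):

```python
# Rough arithmetic behind the "billions of miles" claim.
feet_per_cell = 6         # DNA per cell, from the article
cells_per_person = 30e12  # assumed round figure (~30 trillion cells)
feet_per_mile = 5280

total_miles = feet_per_cell * cells_per_person / feet_per_mile
print(f"~{total_miles / 1e9:.0f} billion miles per person")  # ~34
```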

In fact, all the digital information in the world might be stored in a load of whitish, powdery DNA that fits in a space the size of a large van, says Nick Goldman of the European Bioinformatics Institute in Hinxton, England.

What's more, advocates say, DNA storage would avoid the problem of having to repeatedly copy stored information into new formats as the technology for reading it becomes outmoded.

"There's always going to be someone in the business of making a DNA reader because of the health care applications," Goldman says. "It's always something we're going to want to do quickly and inexpensively."

Getting the information into DNA takes some doing. Once scientists have converted the digital code into the four-letter DNA code, they have to custom-make DNA. For some recent research Strauss and Ceze worked on, that involved creating about 10 million short strings of DNA.

Twist Bioscience of San Francisco used a machine to create the strings letter by letter, like snapping together Lego pieces to build a tower. The machine can build up to 1.6 million strings at a time.

Each string carried just a fragment of information from a digital file, plus a chemical tag to indicate what file the information came from.

In this Oct. 18, 1962 file photo, Dr. Maurice Hugh Frederick Wilkins, 46, of Greenwich, England, stands with a model of a DNA molecule during a news conference in the New York office of the Sloan-Kettering Institute for Cancer Research. Specific sequences of four building blocks—abbreviated as A, C, T and G—found in the DNA molecule give an organism directions for creating particular proteins. Wilkins shared the Nobel Prize for medicine with two other biochemists, Drs. Francis Harry Compton Crick and James Dewey Watson. (AP Photo/Anthony Camerano)


To read a file, scientists use the tags to assemble the relevant strings. A standard lab machine can then reveal the sequence of DNA letters in each string.
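
Putting the two halves together, here is a hedged sketch of that write-and-read cycle: split an encoded DNA string into short tagged fragments, then pull a file back out of a mixed pool by its tag. The tag layout (a file ID plus a fragment index) is hypothetical; real schemes also add error correction and redundancy:

```python
# Hypothetical fragment-and-reassemble sketch for tagged DNA storage.
from typing import List, Tuple

PAYLOAD = 20  # bases of payload per fragment (real strands: ~100-200 bases)

Fragment = Tuple[str, int, str]  # (file ID tag, fragment index, payload)

def fragment(file_id: str, dna: str) -> List[Fragment]:
    """Split an encoded DNA string into tagged, fixed-size fragments."""
    return [(file_id, i // PAYLOAD, dna[i:i + PAYLOAD])
            for i in range(0, len(dna), PAYLOAD)]

def reassemble(file_id: str, pool: List[Fragment]) -> str:
    """Select the fragments carrying the file's tag and restore their order."""
    mine = sorted(f for f in pool if f[0] == file_id)
    return "".join(payload for _, _, payload in mine)

pool = fragment("photo", "ACGT" * 25) + fragment("memo", "TTGA" * 10)
assert reassemble("photo", pool) == "ACGT" * 25
```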

Nobody is talking about replacing hard drives in consumer computers with DNA. For one thing, it takes too long to read the stored information. That's never going to be accomplished in seconds, says Ewan Birney, who works on DNA storage with Goldman at the bioinformatics institute.

But for valuable material like corporate records in long-term storage, "if it's worth it, you'll wait," says Goldman, who with Birney is talking to investors about setting up a company to offer DNA storage.

Sri Kosuri of the University of California, Los Angeles, who has worked on DNA information storage but has now largely moved on to other pursuits, says one challenge for making the technology practical is making it much cheaper.

Scientists custom-build fairly short strings of DNA for research now, but scaling up enough to handle information storage in bulk would require a "mind-boggling" leap in output, Kosuri says. With current technology, that would be hugely expensive, he says.

George Church, a prominent Harvard genetics expert, agrees that cost is a big issue. But "I'm pretty optimistic it can be brought down" dramatically in a decade or less, says Church, who is in the process of starting a company to offer DNA storage methods.

For all the interest in the topic, it's worth noting that so far the amount of information that researchers have stored in DNA is relatively tiny.

Earlier this month, Microsoft announced that a team including Strauss and Ceze had stored a record 200 megabytes. The information included 100 books—one, fittingly, was "Great Expectations"—along with a brief video and many documents. But it was still less than 5 percent of the capacity of an ordinary DVD.

Yet it's about nine times the mark reported just last month by Church, who says the announcement shows "how fast the field is moving."

Meanwhile, people involved with archiving digital data say their field views DNA as a possibility for the future, but not a cure-all.

"It's a very interesting and promising approach to the storage problem, but the storage problem is really only a very small part of digital preservation," says Cal Lee, a professor at the University of North Carolina's School of Information and Library Science.

It's true that society will probably always have devices to read DNA, so that gets around the problem of obsolete readers, he says. But that's not enough.

"If you just read the ones and zeroes, you don't know how to interpret it," Lee says.

For example, is that string a picture, text, a sound clip or a video? Do you still have the software to make sense of it?

What's more, the people in charge of keeping digital information want to check on it periodically to make sure it's still intact, and "I don't know how viable that is with DNA," says Euan Cochrane, digital preservation manager at the Yale University Library. It may mean fewer such check-ups, he says.

Cochrane, who describes his job as keeping information accessible "10 years to forever," says DNA looks interesting if its cost can be reduced and scientists find ways to more quickly store and recover information.

Starr says his data-storage device company hasn't taken a detailed look at DNA technology because it's too far in the future.

There are "always things out on the horizon that could store data for a very long time," he says. But the challenge of turning those ideas into a practical product "really trims the field down pretty quickly."

Smallest hard disk yet stores one bit per atom!

Every day, modern society creates more than a billion gigabytes of new data. To store all this data, it is increasingly important that each single bit occupies as little space as possible. A team of scientists at the Kavli Institute of Nanoscience at Delft University of Technology has reduced storage to the ultimate limit: they stored one kilobyte (8,000 bits), representing each bit by the position of a single chlorine atom. "In theory, this storage density would allow all books ever created by humans to be written on a single postage stamp," says lead scientist Sander Otte. The team reached a storage density of 500 terabits per square inch (Tbpsi), 500 times better than the best commercial hard disk currently available.
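
As a quick plausibility check (an illustrative calculation, not from the paper), 500 terabits per square inch means each bit occupies roughly one square nanometer, consistent with a bit built from a couple of atomic lattice sites:

```python
# Back-of-the-envelope check of the stated areal density.
nm_per_inch = 2.54e7
nm2_per_in2 = nm_per_inch ** 2  # ~6.45e14 nm^2 per square inch
bits = 500e12                   # 500 terabits

print(f"{nm2_per_in2 / bits:.2f} nm^2 per bit")  # ~1.29 nm^2
```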

Feynman


In 1959, physicist Richard Feynman challenged his colleagues to engineer the world at the smallest possible scale. In his famous lecture There's Plenty of Room at the Bottom, he speculated that if we had a platform allowing us to arrange individual atoms in an exact orderly pattern, it would be possible to store one piece of information per atom. To honor the visionary Feynman, Otte and his team coded a section of Feynman's lecture on an area 100 nanometers wide.


Sliding puzzle


The team used a scanning tunneling microscope (STM), which uses a sharp needle to probe the atoms of a surface one by one. Scientists can use these probes to push the atoms around. "You could compare it to a sliding puzzle," Otte explains. "Every bit consists of two positions on a surface of copper atoms, and one chlorine atom that we can slide back and forth between these two positions. If the chlorine atom is in the top position, there is a hole beneath it—we call this a one. If the hole is in the top position and the chlorine atom is on the bottom, then the bit is a zero." Because the chlorine atoms are surrounded by other chlorine atoms, except near the holes, they keep each other in place. That is why this method with holes is much more stable than methods with loose atoms, and more suitable for data storage.
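
A toy model of that bit logic (an illustration of the description above, not the team's software) makes the encoding concrete:

```python
# Each bit is a vertical pair of lattice sites shared by one chlorine
# atom ("Cl") and one vacancy ("hole"); writing slides the atom.

def write_bit(value):
    """Return (top, bottom) site occupancy encoding the requested bit."""
    return ("Cl", "hole") if value else ("hole", "Cl")

def read_bit(top, bottom):
    """Cl on top over a hole reads as 1; hole on top over Cl reads as 0."""
    assert {top, bottom} == {"Cl", "hole"}
    return 1 if top == "Cl" else 0

assert read_bit(*write_bit(1)) == 1
assert read_bit(*write_bit(0)) == 0
```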



STM scan (96 nm wide, 126 nm tall) of the 1 kB memory, written with a section of Richard Feynman's lecture "There's Plenty of Room at the Bottom" (with text markup). Credit: Ottelab/TUDelft


Codes


The researchers from Delft organized their memory in blocks of eight bytes (64 bits). Each block has a marker, made of the same type of holes as the raster of chlorine atoms. Inspired by the pixelated square barcodes (QR codes) often used to scan tickets for airplanes and concerts, these markers work like miniature QR codes that carry information about the precise location of the block on the copper layer. The code will also indicate if a block is damaged—for instance, due to some local contaminant or an error in the surface. This allows the memory to be scaled up easily to very large sizes, even if the copper surface is not entirely perfect.
Explanation of the bit logic and the atomic markers. Credit: Ottelab/TUDelft
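
A hedged sketch of that block organization (field names are hypothetical; the real markers are patterns of holes, not software records) shows how damaged regions can simply be skipped:

```python
# Hypothetical model of 64-bit blocks with location markers and a
# damaged flag, as described above.
from dataclasses import dataclass
from typing import List

@dataclass
class Block:
    row: int          # marker: block position on the copper surface
    col: int
    damaged: bool     # marker: set when local surface defects occur
    bits: List[int]   # 8 bytes = 64 atomic bit positions

def readable_bits(blocks: List[Block]) -> List[int]:
    """Concatenate the payloads of intact blocks, in grid order."""
    intact = sorted((b for b in blocks if not b.damaged),
                    key=lambda b: (b.row, b.col))
    return [bit for b in intact for bit in b.bits]

memory = [Block(0, 1, False, [0] * 64),
          Block(0, 0, False, [1, 0] * 32),
          Block(1, 0, True, [1] * 64)]  # the damaged block is skipped
assert len(readable_bits(memory)) == 128
```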


Datacenters


The new approach offers excellent prospects in terms of stability and scalability. Still, this type of memory should not be expected in datacenters soon. Otte: "In its current form, the memory can operate only in very clean vacuum conditions and at liquid nitrogen temperature (77 K), so the actual storage of data on an atomic scale is still some way off. But through this achievement we have certainly come a big step closer."

Scientists develop way to upsize nanostructures into flexible 3-D printed material




For years, scientists and engineers have synthesized materials at the nanoscale level to take advantage of their mechanical, optical, and energy properties, but efforts to scale these materials to larger sizes have resulted in diminished performance and structural integrity.
Now, researchers led by Xiaoyu "Rayne" Zheng, an assistant professor of mechanical engineering at Virginia Tech, have published a study in the journal Nature Materials that describes a new process for creating lightweight, strong and super-elastic 3-D printed metallic nanostructured materials with unprecedented scalability: a full seven orders of magnitude of control over arbitrary 3-D architectures.

Strikingly, these multiscale metallic materials display super elasticity because of their designed hierarchical 3-D architectural arrangement and nanoscale hollow tubes, resulting in a more than 400 percent increase in tensile elasticity over conventional lightweight metals and ceramic foams.

The approach, which produces multiple levels of 3-D hierarchical lattices with nanoscale features, could be useful anywhere there's a need for a combination of stiffness, strength, low weight and high flexibility, such as in structures to be deployed in space, flexible armors, lightweight vehicles and batteries, opening the door to applications in the aerospace, military and automotive industries.

Natural materials, such as trabecular bone and the toes of geckos, have evolved with multiple levels of 3-D architecture spanning from the nanoscale to the macroscale. Human-made materials have yet to achieve this delicate control of structural features.


"Creating 3-D hierarchical micro features across the entire seven orders of magnitude in structural bandwidth in products is unprecedented," said Zheng, the lead author of the study and the research team leader. "Assembling nanoscale features into billets of materials through multi-leveled 3-D architectures, you begin to see a variety of programmed mechanical properties such as minimal weight, maximum strength and super elasticity at centimeter scales."

The process Zheng and his collaborators use to create the material is an innovation in a digital light 3-D printing technique that overcomes current tradeoffs between high resolution and build volume, a major limitation in scalability of current 3-D printed microlattices and nanolattices.
Related materials that can be produced at the nanoscale, such as graphene sheets, can be 100 times stronger than steel, but trying to upsize them in three dimensions degrades their strength by eight orders of magnitude; in other words, they become 100 million times weaker.

"The increased elasticity and flexibility obtained through the new process and design come without incorporating soft polymers, thereby making the metallic materials suitable as flexible sensors and electronics in harsh environments, where chemical and temperature resistance are required," Zheng added.

These multi-leveled hierarchical lattices also mean more surface area is available to collect photon energy, which can enter the structure from all directions and be collected not just on the surface, as in traditional photovoltaic panels, but also inside the lattice structure. One of the great opportunities this study creates is the ability to produce multifunctional inorganic materials, such as metals and ceramics, to explore photonic and energy-harvesting properties in these new materials.

Besides Zheng, team members include Virginia Tech graduate research students Huachen Cui and Da Chen from Zheng's group, and colleagues from Lawrence Livermore National Laboratory. The research was conducted under Department of Energy Lawrence Livermore National Laboratory-directed research support, with additional support from Virginia Tech, the SCHEV fund from the state of Virginia, and the Defense Advanced Research Projects Agency.

Relativistic jets and the Collapsar model






Gamma-ray bursts (GRBs) were first detected by American nuclear detection surveillance satellites in the late 1960s. The Vela spacecraft series were designed to monitor worldwide compliance with the 1963 Nuclear Test Ban Treaty. The satellites detected no clandestine nuclear explosions, but they discovered something far more interesting: powerful bursts of gamma rays emanating from random directions in space. By analyzing the different arrival times of the bursts as detected by different satellites, scientists concluded that the sources of the bursts were cosmic, not terrestrial or solar. The discovery was declassified and published in 1973 as an Astrophysical Journal article entitled "Observations of Gamma-Ray Bursts of Cosmic Origin". This alerted the astronomical community to the existence of GRBs, now recognized as the most violent events in the universe.

To this day GRBs remain one of the greatest mysteries of modern astronomy. We know that GRBs lasting less than 2 seconds (short GRBs) may originate from a variety of processes. There are several theories that explain how the energy from a gamma-ray burst progenitor is turned into radiation. One hypothesis for how long gamma-ray bursts originate is called the "collapsar" model: gamma rays are generated when massive, spinning stars collapse to form black holes and spew out powerful jets of plasma at nearly the speed of light. These stellar collapses (collapsars) are thought to be similar to supernovae, except that a jet is produced by the accretion of stellar material onto a compact object formed at the center of the collapsing star.


These jets are called "relativistic jets," and they can transport energy from the collapsed core to large distances. Inside the jet, uneven distributions of temperature, density and pressure create internal shock waves that move inward and outward as faster regions within the jet collide with slower ones. The collisions between the fast-moving gas and its surroundings, as well as within the jet itself, create gamma rays. When the jet hits the surrounding interstellar medium, it produces another shock wave. This causes particles to rapidly lose energy (fast cooling), due to the strong magnetic field in the GRB emission region, through a process known as synchrotron radiation. This phenomenon is observed as a long gamma-ray burst, and it is followed by a so-called "afterglow," a slowly fading emission that can be seen at all wavelengths, starting with X-rays, followed by ultraviolet, visible and infrared light, and eventually radio waves. The afterglow can last for days or even weeks.
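
For readers who want the quantitative version, the standard synchrotron cooling timescale (a textbook formula in Gaussian units, not taken from this article) makes the magnetic-field dependence behind "fast cooling" explicit:

```latex
% Electron energy divided by synchrotron power, for Lorentz factor \gamma
% in magnetic field B; stronger fields mean faster cooling.
t_{\mathrm{syn}} = \frac{E}{P_{\mathrm{syn}}}
  = \frac{\gamma m_e c^2}{\frac{4}{3}\,\sigma_T c\,\gamma^2\,\frac{B^2}{8\pi}}
  = \frac{6\pi\, m_e c}{\sigma_T\, \gamma\, B^2}
```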


NASA is taking a huge risk flying Juno near Europa

The Juno spacecraft orbiting around Jupiter right now is giving us a lot of reasons to get excited. But there’s one huge worry lurking in the back of NASA scientists’ minds. It’s possible the spacecraft could crash into and contaminate Jupiter’s icy moon Europa, one of the most likely places we might find alien life in the solar system.