
Startup Lumicell Revolutionizes Breast Cancer Surgery with Real-Time Tissue Imaging

Monday, November 11, 2024 / No Comments

 

A new technology developed by the startup Lumicell, an MIT spinout, is providing surgeons with a real-time, in-depth view of breast cancer tissue during surgery, enhancing the precision and effectiveness of breast cancer procedures.

 By using a handheld scanner in combination with an optical imaging agent, the device allows surgeons to immediately visualize residual cancer cells in the surgical cavity, ensuring more complete tumor removal. This innovation helps minimize the likelihood of leaving behind cancerous tissue, which could otherwise lead to follow-up surgeries.

The technology integrates advanced imaging techniques with AI algorithms, enabling surgeons to assess tumor margins in real-time, as opposed to the current standard where pathology results take days. With this immediate feedback, surgeons can make more informed decisions during the operation, potentially reducing recurrence rates and improving patient outcomes.

If widely adopted, Lumicell's approach could transform the standard of care by making surgeries more targeted, reducing the need for repeat procedures, and improving recovery times. The FDA's recent approval of Lumicell's technology marks a significant step forward in personalized and precise cancer care.

How AI is Helping California Prevent Wildfires Before They Start

Tuesday, November 5, 2024 / No Comments


 California has adopted advanced AI technology to identify potential wildfires before they ignite, aiming to combat the increasing threat posed by these natural disasters. The state is leveraging a network of cameras, satellites, and sensors combined with machine learning algorithms to monitor vast, fire-prone areas for early signs of trouble.

These AI systems analyze real-time data feeds from thousands of cameras positioned throughout the region and use pattern recognition to detect subtle changes in the environment, such as smoke or other early indicators of fire. When suspicious activity is spotted, the AI can alert fire response teams almost instantly, enabling them to take preventive measures before a small spark escalates into a large-scale wildfire.

AI also plays a role in predictive modeling, using historical data, weather patterns, and vegetation analysis to forecast where wildfires are most likely to occur. This helps in preemptively directing resources, such as clearing brush or positioning firefighting crews strategically, to areas at high risk.
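As a rough illustration of this kind of predictive modeling, the sketch below combines a few environmental readings into a single risk score. The feature names, weights, and values are invented for the example and are not taken from California's actual system.

```python
# Hypothetical risk-score sketch: none of these feature names, weights, or
# thresholds come from California's system; they only illustrate the idea of
# weighting weather and vegetation data into a single number.

def fire_risk_score(humidity_pct, wind_kmh, days_since_rain, dry_fuel_index):
    """Combine a few environmental readings into a 0-1 wildfire risk score."""
    score = (
        0.35 * (1 - humidity_pct / 100)          # drier air raises risk
        + 0.25 * min(wind_kmh / 60, 1.0)         # cap wind contribution at 60 km/h
        + 0.20 * min(days_since_rain / 30, 1.0)
        + 0.20 * dry_fuel_index                  # assumed pre-scaled to 0-1
    )
    return round(score, 3)

# A hot, dry, windy day scores far higher than a cool, damp one.
high = fire_risk_score(humidity_pct=12, wind_kmh=45, days_since_rain=25, dry_fuel_index=0.9)
low = fire_risk_score(humidity_pct=70, wind_kmh=8, days_since_rain=2, dry_fuel_index=0.2)
print(high, low)  # 0.842 0.192
```

In a real system the weights would be learned from historical fire data rather than set by hand.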


The use of AI in wildfire detection offers significant benefits, including faster response times and more efficient allocation of firefighting resources. However, it also comes with challenges, such as ensuring the accuracy of AI predictions and managing the vast amounts of data collected.


Overall, California’s deployment of AI technology is part of a broader initiative to mitigate the devastating impact of wildfires and safeguard communities from the increasing frequency and severity of these events.

Wearable Devices for Neurons: Probing Brain Function and Restoration

Saturday, November 2, 2024 / No Comments


MIT scientists have developed innovative "wearable" devices that can wrap around neurons, offering new possibilities for probing and interacting with subcellular regions of the brain. These microscopic devices are designed to conform tightly to individual neurons, allowing for high-precision measurements and interactions at the cellular level. The concept is similar to wearable technology for humans but scaled down to interact directly with cells.

The primary applications of these neuronal "wearables" include detailed mapping of electrical and chemical signals in subcellular areas, which could provide deeper insights into how the brain functions at the most intricate levels. By accessing and monitoring these tiny regions, researchers can better understand processes like signal transmission and synaptic activity. This could lead to breakthroughs in understanding neurological diseases and disorders.

Moreover, there is potential for these devices to be used in therapeutic applications. For example, they could be engineered to deliver electrical stimulation or drugs directly to specific parts of the brain, possibly aiding in the restoration of lost brain functions or modifying neuronal activity to address disorders such as epilepsy or Parkinson's disease.


This new approach marks a significant step in neurotechnology, merging micro-engineering and neuroscience to create tools that are more integrated with biological structures than ever before.

Elon Musk Predicts 10 Billion Humanoid Robots by 2040 Priced at $20K-$25K Each

Wednesday, October 30, 2024 / No Comments

 

Tesla CEO Elon Musk predicted on Tuesday at the Future Investment Initiative in Saudi Arabia that humanoid robots could outnumber humans by 2040. Musk envisions about 10 billion robots globally, enabled by advances in robotics that could bring the cost down to between $20,000 and $25,000 for a "robot that can do anything." This aligns closely with Tesla's Optimus robot pricing, which Musk anticipates could reach $20,000 to $30,000 in the long term with mass production.

The Tesla Optimus project began in 2021 and, despite a rocky start with a human in a robot costume, has shown incremental progress. At Tesla's recent “We, Robot” event, Optimus units performed tasks such as handing out drinks and interacting with guests, though some actions were teleoperated to enhance performance. Tesla’s Optimus lead Milan Kovac confirmed that about 20 robots were active during the event, with minor incidents, including a robot fall.

Currently, two Optimus robots work on the factory floor, though Tesla has not specified their roles. Musk projected limited production to begin next year, targeting thousands of robots in Tesla facilities by 2025 and mass production by 2026, ultimately aiming for Optimus to be Tesla’s largest product line and potentially pushing Tesla's valuation to $25 trillion.

Tesla faces competition from companies like Figure AI, Apptronik, Toyota Research Institute, and Boston Dynamics, which are also investing heavily in humanoid robot technology.

Elon Musk Unveils Tesla Cybercab: A Fully Autonomous Robotaxi

Tuesday, October 29, 2024 / No Comments

 

Elon Musk Reveals Tesla Cybercab Robotaxi, Promises Sub-$30,000 Autonomous Car by 2027 and a 20-Passenger 'Robovan'

Tesla CEO Elon Musk has introduced the Cybercab, the company’s highly anticipated robotaxi, setting its price at under $30,000. Musk also announced Tesla's intention to launch autonomous driving capabilities for its Model 3 and Model Y vehicles in California and Texas by next year.

The unveiling took place at the We, Robot event at Warner Bros. Studios in Burbank, California. Musk arrived in the Cybercab, donning his signature black leather jacket and accompanied by a man dressed as an astronaut. Human-like robots entertained the crowd, dancing and serving drinks to attendees, adding a futuristic touch to the celebration.

Prior to Tesla’s announcement, many analysts remained skeptical about the company’s ability to deliver on its long-standing promise of fully self-driving vehicles. Tesla’s robotaxi vision has been in the pipeline for nearly five years, with autonomous driving features teased for almost a decade.

At the We, Robot event, Musk revealed that 20 additional Cybercabs were present, along with 50 fully autonomous vehicles available for test drives across the 20-acre venue. He highlighted the Cybercab’s revolutionary design, featuring neither a steering wheel nor pedals and utilizing inductive charging instead of a plug.

Musk also noted that Tesla had “overspecced” the computer in each vehicle, employing an Amazon Web Services-like approach that allows computational power to be distributed across its vehicle network, enhancing efficiency and functionality.



Musk announced that Tesla expects the Cybercab to cost under $30,000 (approximately £22,980 or A$44,500). He projected the robotaxi to be in production "in 2026" before pausing and amending his estimate to “before 2027,” acknowledging his tendency toward optimistic timelines.

Envisioning a future transformed by autonomous vehicles, Musk described a world where parking lots could be repurposed as parks, and passengers could relax, sleep, or watch movies in a “comfortable little lounge” during their trips. He noted that Cybercabs could serve as Uber-like taxis when not in use by their owners and even suggested that people could operate fleets of these vehicles, creating ride-share networks akin to a “shepherd with a flock of cars.”

“It’s going to be a glorious future,” he declared.

Tesla’s Model 3 and Model Y vehicles are set to transition from supervised to fully unsupervised self-driving, starting in California and Texas next year, with expansion planned across the U.S. and globally as regulatory approvals permit. While the S and X models will also gain autonomous capabilities, Musk did not specify a timeline for these.

“With autonomy, you get your time back. It’ll save lives, a lot of lives, and prevent injuries,” Musk emphasized, citing Tesla’s extensive driving data collected from millions of vehicles as a key factor in making autonomous driving safer than human drivers.

“With that amount of training data, it’s obviously going to be much better than a human can be because you can’t live a million lives,” Musk stated. “It doesn’t get tired, and it doesn’t text. It’ll be 10, 20, even 30 times safer than a human.”


Musk also unveiled the “Robovan,” an autonomous van designed to carry up to 20 passengers and cargo, though he did not disclose pricing or a production timeline. In addition, he highlighted significant progress on Tesla’s humanoid robot, Optimus. As the robots moved among attendees to serve drinks, Musk urged, “Please be nice to the Optimus robots.” At the end of the event, several robots danced on a neon-lit stage to Daft Punk's Robot Rock, with Musk estimating a future production cost of around $30,000 per robot.

The event showcased Tesla’s autonomous innovations amid ongoing challenges. The company currently faces a class-action lawsuit in the U.S. from Tesla owners who had been promised full self-driving capabilities that remain undelivered. Following pressure from U.S. safety regulators in February last year, Tesla issued a recall to address software allowing speeding and other violations in its full self-driving mode. In April, regulators launched an investigation into whether Tesla’s full self-driving and autopilot systems were sufficiently ensuring that drivers remained attentive, prompted by reports of 20 crashes involving autopilot since the initial recall.


Groundbreaking Achievement: High-Performance Computing Analyzes Quantum Photonics on a Large Scale for the First Time

Monday, October 28, 2024 / No Comments

Scientists at Paderborn University have for the first time used high-performance computing, in the form of their supercomputer Noctua, to conduct a large-scale analysis of a quantum photonics experiment.

Researchers at Paderborn University in Germany have developed high-performance computing (HPC) software capable of analyzing and describing the quantum states of a photonic quantum detector.

HPC utilizes advanced classical computers to handle large datasets, conduct complex calculations, and swiftly tackle challenging problems. However, many classical computational methods cannot be directly applied to quantum applications. This new study indicates that HPC may offer valuable tools for quantum tomography, the technique employed to ascertain the quantum state of a quantum system.

In their study, the researchers state, “By developing customized open-source algorithms using high-performance computing, we have performed quantum tomography on a photonic quantum detector at a mega-scale.”

HPC enables mega-scale quantum tomography

A quantum photonic detector is a sophisticated instrument designed to detect and measure individual light particles (photons). Highly sensitive, it can collect detailed information about various properties of photons, including their energy levels and polarization. This data is invaluable for quantum research, experiments, and technologies.

Accurately determining the quantum state of the photonic detector is crucial for achieving precise measurements. However, the process of performing quantum tomography on such an advanced tool requires handling large volumes of data.

This is where the newly developed HPC software comes into play. To showcase its capabilities, the researchers stated, "We performed quantum tomography on a megascale quantum photonic detector covering a Hilbert space of 10^6."

Hilbert space is a mathematical concept that describes a multi-dimensional space where each point represents a possible state of a quantum system. It includes an inner product for calculating distances and angles between states, which is essential for understanding concepts such as probability and superposition. These spaces can possess infinite dimensions, representing a wide array of potential states.
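As a toy illustration of these ideas, the snippet below represents two single-qubit states as complex vectors and computes their inner product with NumPy. This is only a two-dimensional example, far from the mega-scale spaces handled by the Paderborn software.

```python
# Toy example: states of a qubit live in a 2-dimensional Hilbert space, and the
# inner product between two states gives the probability amplitude of measuring
# one in the basis of the other. (Illustrative only; not the Paderborn code.)
import numpy as np

ket0 = np.array([1, 0], dtype=complex)               # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # |+> = (|0> + |1>) / sqrt(2)

amplitude = np.vdot(ket0, plus)      # <0|+>; vdot conjugates its first argument
probability = abs(amplitude) ** 2    # Born rule: |<0|+>|^2

print(round(probability, 2))  # 0.5: measuring |+> yields |0> half the time
```

The dimension grows exponentially with system size, which is why tomography at the 10^6 scale demands HPC resources.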

With the HPC software, the researchers successfully “completed calculations that described the quantum photonic detector within a few minutes—faster than anyone else before,” they added.

Classical Computing Breakthroughs Spark New Advances in Quantum Technology

HPC is not just limited to determining the state of the quantum photonic detector. By leveraging the inherent structure of quantum tomography, the researchers were able to enhance the efficiency of the process.

This optimization enables them to manage and reconstruct quantum systems with up to 10^12 elements. "This demonstrates the unprecedented extent to which this tool can be applied to quantum photonic systems," said Timon Schapeler, the first author of the study and a research scientist at Paderborn University.

“As far as we know, our work is the first contribution in the field of classical high-performance computing that facilitates experimental quantum photonics on a large scale,” Schapeler added.

The HPC-driven quantum tomography approach holds promise for advancing more efficient data processing, quantum measurement, and communication technologies in the future.

The study is published in the journal Quantum Science and Technology.


The Future of Connectivity: 6G Networks Expected to Outpace 5G by 9,000 Times

Saturday, October 26, 2024 / No Comments

 

Next-generation phone networks could significantly surpass current ones thanks to a novel method for transmitting multiple data streams across a wide range of frequencies.

Wireless data has been transmitted at a remarkable speed of 938 gigabits per second, which is over 9,000 times faster than the average speed of a current 5G phone connection. This speed is equivalent to downloading more than 20 average-length movies each second and sets a new record for multiplex data, where two or more signals are combined.

The high demand for wireless signals at large events such as concerts, sports games, and busy train stations often causes mobile networks to slow down significantly. This issue primarily arises from the limited bandwidth available to 5G networks. The portion of the electromagnetic spectrum allocated for 5G varies by country, typically operating at relatively low frequencies below 6 gigahertz and only within narrow frequency bands.

To enhance transmission rates, Zhixin Liu from University College London and his team have utilized a broader range of frequencies than any previous experiments, spanning from 5 gigahertz to 150 gigahertz, employing both radio waves and light.

Liu explains that while digital-to-analog converters are currently used to transmit zeros and ones as radio waves, they face challenges at higher frequencies. His team applied this technology to the lower portion of the frequency range and employed a different technique using lasers for the higher frequencies. By combining both methods, they created a wide data band that could be integrated into next-generation smartphones.

This innovative approach enabled the team to transmit data at 938 Gb/s, which is over 9,000 times faster than the average download speed of 5G in the UK. This capability could provide individuals with incredibly high data rates for applications that are yet to be imagined, and ensure that large groups of people can access sufficient bandwidth for streaming video.
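The headline figures can be sanity-checked with a little arithmetic. The roughly 100 Mb/s baseline for an average UK 5G connection and the ~5.5 GB movie file are assumed illustrative values, not figures from the article:

```python
# Sanity check on the headline numbers. The ~100 Mb/s average 5G speed and the
# ~5.5 GB movie size are assumed values for illustration, not from the article.

link_gbps = 938            # demonstrated multiplex rate, in Gb/s
avg_5g_gbps = 0.1          # assumed average 5G download speed, in Gb/s
speedup = link_gbps / avg_5g_gbps
print(round(speedup))      # on the order of 9,000x

bytes_per_second = link_gbps * 1e9 / 8   # bits per second -> bytes per second
movie_bytes = 5.5e9                      # assumed average movie size
print(round(bytes_per_second / movie_bytes, 1))  # roughly 20 movies per second
```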

While this achievement sets a record for multiplex data, single signals have been transmitted at even higher speeds, surpassing 1 terabit per second.

Liu likens splitting signals across wide frequency ranges to transforming the “narrow, congested roads” of current 5G networks into “10-lane motorways.” He notes, “Just like with traffic, wider roads are necessary to accommodate more cars.”

Liu mentions that his team is currently in discussions with smartphone manufacturers and network operators, expressing hope that future 6G technology will build on this work, although other competing approaches are also being developed.

Robotics: Piloting a Robot Through a Virtual Reality Headset

Wednesday, May 6, 2015 / No Comments

Visit a museum or a monument, or enjoy a sunset on the other side of the world, all without leaving home: that is the promise of the platform developed at the University of Pennsylvania (USA) around a remote-controlled mobile robot. With this system, a person wearing an Oculus Rift virtual reality headset steers the cameras filming the scene simply by moving their head, giving the pilot an immersive first-person view.

In the near future, combined advances in robotics and virtual reality should give rise to many novel applications. Think of out-of-body experiences such as the lunar rover developed by students and researchers at Carnegie Mellon University in the US, which aims to let the public explore the Moon from a first-person view.

From a more down-to-earth perspective, combining a virtual reality headset with a remote-controlled robot could help rescue teams assess conditions more precisely in hard-to-reach or hazardous areas, not to mention many military applications. And why not imagine one day visiting a museum or a monument on the other side of the world as if you were actually there?

It is precisely this kind of immersion that a team from the University of Pennsylvania (USA) has sought to reproduce with its project DORA (Dexterous Roving Observational Automaton), which presents itself as an "immersive teleoperated robotic platform." It pairs an Oculus Rift virtual reality headset with a remote-controlled robot whose head is equipped with two video cameras.

The person wearing the headset effectively sees through the eyes of the robot, as if they were there. While this kind of technique is not new in itself, the innovation lies in the freedom of movement DORA offers: the system tracks head movements precisely in six degrees of freedom. The objective is a level of immersion such that the person has the impression of actually being on site. The demonstration video published on Vimeo shows human and machine moving virtually as one body.

Faithfully reproducing head movements

The Oculus headset detects both the orientation of the head, using its inertial sensors, and its position, using infrared beacons. This information is transmitted to Arduino and Intel Edison microcontrollers, through which the robot reproduces the movements of the head. The cameras film at a resolution of 976 x 582 pixels at 30 frames per second, although the Oculus headset could support a higher quality.

The main technical challenge was minimizing the latency between the moment the person moves their head and the moment the headset's video display reflects that action. In that interval, several steps must be completed in a very short time: the computer receives the movement data, processes it, captures the corresponding video frame, and returns it to the Oculus headset. Currently, the DORA system has a latency of about 70 milliseconds. According to Oculus, the maximum acceptable latency to guarantee immersion and realism in virtual reality is 60 milliseconds. The gap is not huge, especially since the DORA designers must also contend with the speed of the wireless link between the headset and the robot, as well as friction in the moving parts. They believe they can optimize the system to close it.
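The latency budget described above can be sketched as a simple sum of stages. Only the roughly 70 ms total and the 60 ms target come from the article; the per-stage figures below are invented for illustration:

```python
# Illustrative latency budget. Only the ~70 ms total and the 60 ms target are
# from the article; the individual stage timings are invented.

stage_ms = {
    "receive head-tracking data": 10,        # assumed
    "process the movement": 15,              # assumed
    "capture the matching video frame": 25,  # assumed
    "return the frame to the headset": 20,   # assumed
}

target_ms = 60  # Oculus's stated ceiling for convincing immersion
total_ms = sum(stage_ms.values())
print(total_ms, "ms total,", total_ms - target_ms, "ms over target")
```

Shaving milliseconds off any single stage, e.g. the wireless return leg, would bring the total under the target.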

Currently, the wireless link between the operator and the robot is a radio connection with a range of up to 7 km. For commercial use, the system would need to rely on Wi-Fi or 4G cellular networks, whose performance should be sufficient to avoid excessive latency, which is not necessarily easy. For now, DORA is primarily a proof of concept, and the project team has not decided on possible commercialization.

Record: Delphi's Autonomous Car Travels 5,471 km

Wednesday, April 8, 2015 / No Comments

Delphi, a manufacturer of electronic components previously unknown to the general public, has completed the longest journey ever undertaken under autonomous driving in North America. Throughout the 5,471 km crossing of the United States, the Audi SQ5 adapted in real-world conditions to a variety of situations (weather, highway exits, detours, roadworks, etc.).

As it had announced, Delphi Automotive PLC has completed a coast-to-coast crossing of the US in an autonomous car. On March 22, a specially equipped Audi SQ5 set off from San Francisco for New York on a journey of 5,471 km. Fitted with a multitude of radar sensors, cameras, and microprocessors, it covered 99% of the route without human assistance. Delphi, based in the UK, already develops components for autonomous driving systems, and this experiment allowed it to assess the extent of its expertise across a variety of driving conditions. The trip is recounted in a press release and on the Delphi Drive web page.

Month after month, carmakers and other brands multiply their announcements about innovation in autonomous cars. At the New York Auto Show last Thursday, Nissan CEO Carlos Ghosn promised that autonomous cars would reach Japanese roads by the end of 2016, and that before 2020 they would be able to navigate highways and urban roads alike without the help of a human operator.

Six experts accompanied the autonomous Audi

Given this context, it is surprising that Delphi should be the first to cross the US from coast to coast. Its journey is the longest ever completed under autonomous driving on US roads. Over nine days, the SQ5 crossed 15 states and, as expected, encountered many potentially difficult situations: bad weather, aggressive surrounding drivers, detours for roadworks, and so on. A human operator can understand and respond to such conditions; for a computer system they are potentially far harder to interpret.


Six Delphi experts accompanied the autonomous Audi, either inside the vehicle or in a second car, to receive and analyze the data produced by the sensors and systems responsible for autonomous driving. The trip generated more than 2 terabytes of data on the car's capabilities, including automatic parking, highway driving, lane changes, highway exits, and city driving. "The performance of our car during this trip was outstanding and exceeded our expectations," observed Jeff Owens, chief technology officer at Delphi. "The insights gained through this trip will help us optimize our existing safety products and accelerate the development of future products."

Printing a 3D Object with an Ultra-Fast 3D Printer

Thursday, March 19, 2015 / No Comments

Printing a 3D object can take several hours, and the resulting mechanical strength is not always up to expectations. A team of US researchers has developed a new process that prints 3D objects that are less fragile, and does so far more quickly. They have even created a start-up, Carbon3D.

The announcement appeared a few days ago in the journal Science. A team of researchers at North Carolina State University (USA), led by Joseph DeSimone, has managed to both speed up and smooth the 3D printing process through what could be called a simple chemist's trick. CLIP, for Continuous Liquid Interface Production: that is the name they have given to this innovative technology, which accelerates production by a factor of 25 to 100.

Recall that additive manufacturing, known today as 3D printing, emerged in the mid-1980s; it makes it possible to manufacture an object by depositing material layer by layer. According to the Wohlers Report 2014, the worldwide market for 3D printing tripled in just three years, reaching more than $3.07 billion in 2013. Applications are numerous in the armaments, aerospace, medical, and research industries, to name a few.

The end of layer-by-layer printing?

Traditionally, 3D printers build an object layer by layer, for example by applying ultraviolet radiation to a liquid photosensitive resin, which hardens under the effect of the radiation. Once a first layer has solidified, the object is pulled up to allow the creation of a second layer, then a third, a fourth, and so on. Depending on the size and complexity of the object, this process, known as stereolithography, can last hours. Indeed, the radiation must be interrupted between each layer to allow a little more liquid resin to flow in. The resulting object can also present mechanical weaknesses precisely because of its layered composition.


The North Carolina team proposes to proceed continuously rather than layer by layer. To do this, the chemists chose an oxygen-permeable material for the bottom of the container holding the resin bath, since oxygen prevents the resin from solidifying under ultraviolet radiation. They were thus able to create a kind of "dead zone" only a few tens of microns thick in which the resin remains liquid, which makes it possible to form objects continuously, without having to pause while fresh liquid resin flows in.

More solid objects and a greater choice of materials

This new process ultimately has several advantages. The first has already been mentioned: it is faster. Typically, where a stereolithography print takes more than 11 hours, the one shown in the team's video takes just over six minutes. By doing away with layer-by-layer manufacturing, CLIP can also produce less fragile objects. The US chemists add that the method makes it possible to use materials that until now were unsuited to 3D printing, such as flexible or rubbery ones; elastomers, for instance, could make it possible to print sneakers or auto parts.
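The claimed acceleration is easy to check: a conventional print of just over 11 hours against a CLIP print of just over six minutes lands at the top of the stated 25-100x range.

```python
# Checking the claimed acceleration: 11+ hours of conventional stereolithography
# versus just over six minutes with CLIP (6.5 min assumed as "just over six").

slow_minutes = 11 * 60   # conventional print time, in minutes
fast_minutes = 6.5       # approximate CLIP print time from the team's video

speedup = slow_minutes / fast_minutes
print(round(speedup))    # just above 100x, the top of the stated 25-100x range
```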


Convinced of the value of their idea, the North Carolina State University researchers have launched a start-up called Carbon3D. Its objective is to bring a 3D printer to market within the year. "We are eager to find out how engineers around the world will implement it at the heart of their projects," says Joseph DeSimone.

A Tire to Recharge an Electric Car's Battery?

Saturday, March 14, 2015 / No Comments

The North American manufacturer Goodyear has unveiled a concept tire combining thermoelectric and piezoelectric components that convert heat and motion into electricity. The extra energy could extend the range of electric vehicles.

A few months ago, we mentioned work carried out in the US by researchers at the Department of Energy's Oak Ridge National Laboratory, whose team managed to extract carbon black from used tires and use it to manufacture the anode of a lithium-ion battery. In a radically different approach, Goodyear is working on a tire capable of generating electricity to charge the batteries of an electric or hybrid car. At the Geneva Motor Show, which began last week, the manufacturer unveiled what is for now only a concept, called BH03.

This tire combines thermoelectric and piezoelectric components. For the former, an "ultra-black" rubber texture would absorb the heat generated when the vehicle is parked and the tire is warmed by the sun. As for the piezoelectric components, a braiding incorporated into the carcass would react to the rolling and deformation of the tire. Goodyear provides no precise data on the performance of the BH03, merely stating that it could extend the range of an electric car. Energy recovery remains one of the crucial areas of research for manufacturers, and this Goodyear concept seems an attractive idea, at least on paper.

A specific rim to recover energy

However, many technical questions remain, particularly about the conversion system and battery charging. The concept suggests that the tire would need to be paired with a specific rim that would make the connection with the energy recovery module. It is also unclear whether electricity production would be correlated with vehicle speed. Moreover, Goodyear has not announced any launch date for such a product.


This project recalls another, equally ambitious one, on which the German brand Audi is working. Its R&D department is exploring the possibility of using the car's shock absorbers to generate electricity. Like a kinetic energy recovery system (KERS), the idea is to capture the heat produced by the suspension, which, according to Audi, can reach between 100 and 125 °C. Audi has developed a shock absorber incorporating a generator synchronized with the mechanical movement of the part. But here again, the manufacturer gives no specific roadmap.

The Solar Airplane Solar Impulse 2 Takes Off from Abu Dhabi on a Round-the-World Trip

Wednesday, March 11, 2015 / No Comments

The solar airplane Solar Impulse 2 (SI2) took off early this morning from Abu Dhabi for the first leg of its round-the-world trip, heading for Muscat in the Sultanate of Oman. Originally scheduled for Saturday, the flight had been delayed because of the wind, a binding constraint for this very slow aircraft. The 35,000-kilometer journey will last five months and include twelve stages.

This morning at 3:12 Universal Time (4:12 am French time, 7:12 am local time), the HB-SIB, piloted by André Borschberg, took off from the runway of Al Bateen Airport, reserved for business aviation, in Abu Dhabi, capital of the UAE. SI2 headed east on a fairly short stage, a little over 400 km, to be covered in about a dozen hours. The solar airplane, with its 72 m wingspan and four motors of 13.5 kW (17.5 hp) each, flies slowly (about 80 km/h), and not in a straight line, so as to avoid overly troublesome winds. André Borschberg is expected on the runway of Muscat International Airport in Oman this evening.

The solar plane took off at sunrise with only weakly charged batteries (charged by sunlight, on the ground, over the previous days), but in good weather its 17,248 photovoltaic cells produce more power than the motors and avionics consume. The HB-SIB will therefore land tonight with its batteries charged. As André Borschberg explained during flights of the HB-SIA prototype, with this plane one can "refuel while flying."
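The flight numbers above can be cross-checked with simple arithmetic; the 400 km leg, the 80 km/h cruise speed, and the motor ratings are taken from the article, and the gap between the straight-line time and the expected dozen hours reflects the non-straight routing:

```python
# Cross-checking the article's figures for the first leg.

leg_km = 400         # "a little over 400 km"
cruise_kmh = 80      # stated cruise speed

print(leg_km / cruise_kmh)  # 5.0 hours in a straight line; detours around
                            # winds stretch this toward the stated dozen hours

total_power_kw = 4 * 13.5   # four motors of 13.5 kW each
print(total_power_kw)       # 54.0 kW, about 70 hp in total
```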
This five-month world tour, the culmination of an adventure that began twelve years ago and currently involves approximately 80 people, is not intended to test a prototype for future airliners.

The technologies used are innovative, both in the capture and management of energy and in the construction of the aircraft, which is like no other. The Solar Impulse team wants to demonstrate that moving away from fossil energy sources is an opportunity to find new technical solutions in multiple areas.

A solar-powered cuff thermometer

Saturday, February 28, 2015 / No Comments

A team of University of Tokyo researchers recently presented a flexible cuff thermometer. Self-powered by a solar panel, it sounds an alarm when the patient's temperature becomes too high. Made from organic components with an inkjet printer, this cheap, disposable product is intended for use in a hospital setting.

The development of sensors to monitor vital signs is booming, whether in smart textiles for sports or in medical devices; the many projects around epidermal patches are one example. The challenge of these innovations is to design devices that are at once minimally invasive, energy efficient and inexpensive to produce.

It is in this context that a team of University of Tokyo researchers has developed a flexible cuff thermometer powered by a solar panel. Worn on the skin or over clothing, it beeps when the user's body temperature exceeds a preset threshold, which can be set between 36.5 and 38.5 °C.

The cuff combines a flexible solar panel made of amorphous silicon (a-Si) solar cells, a piezoelectric speaker, a temperature sensor and a power supply circuit. The latter was made from organic components deposited by an inkjet printer on a polymer film.

All elements of the cuff thermometer are flexible: the photovoltaic cells (solar cells), the piezoelectric speaker, the power supply and management system based on organic compounds (organic circuits), and the temperature sensor placed under the arm. © University of Tokyo
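The alarm behavior described above reduces to a simple comparison. This is a minimal sketch of that logic; the threshold range comes from the article, while the function name and structure are invented for illustration.

```python
# Minimal sketch of the cuff's alarm logic. The 36.5-38.5 °C configurable
# range is from the article; the code itself is purely illustrative.
THRESHOLD_MIN_C = 36.5
THRESHOLD_MAX_C = 38.5

def should_beep(body_temp_c: float, threshold_c: float) -> bool:
    """Return True when the piezoelectric speaker should sound the alarm."""
    if not THRESHOLD_MIN_C <= threshold_c <= THRESHOLD_MAX_C:
        raise ValueError("threshold outside the cuff's configurable range")
    return body_temp_c > threshold_c

print(should_beep(37.0, 38.0))  # False: temperature below threshold
print(should_beep(38.6, 38.0))  # True: fever alarm sounds
```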

An inexpensive, disposable product for hospitals

The researchers say this is the first time such a device has worked with a supply circuit of organic origin. The circuit boosts the solar panel's efficiency more than sevenfold under indoor lighting conditions. It is also configured to drive a sound output, which the team says is likewise a first. Flexible and self-powered, the armband could be manufactured at low cost, making single use feasible, which is well suited to hospital hygiene rules.


Moreover, the cuff's designers note that the system could be combined with sensors for other vital signs such as heart rate, blood pressure or sweating. The concept was presented at the IEEE International Solid-State Circuits Conference held this week in San Francisco (United States). There is currently no commercial project for this innovation.

Volvo Announces Production-Viable Autonomous Car

Monday, February 23, 2015 / No Comments

Among carmakers, Volvo is one of the strongest proponents of self-driving cars. It launched its "Drive Me" pilot program to test cars on public roads in Gothenburg, Sweden last year, and now it is unveiling a further refinement of the technology.

Volvo says its improved Autopilot system is a "complete, production-viable autonomous drive system." The company plans to put it into 100 test vehicles, which will be driven by actual customers around Gothenburg, by 2017.

Like other autonomous systems, Autopilot relies on an array of sensors to let the vehicle navigate itself. Those include a windshield-mounted camera and radar unit shared with the 2016 XC90, but beyond that there are enough cameras and scanners to make the NSA envious.

A 360-degree sensor sweep is achieved thanks to four additional radar units (two in each bumper), cameras at all four corners of the car, a trifocal camera in the windshield, ultrasonic sensors, and a front-mounted laser scanner.

All the information gathered by this equipment is compared with a cloud-stored 3D map and GPS data to help the vehicle determine where it is going.

Test cars will also use a cloud-based connection to link up with local traffic control centers. This gives the cars real-time traffic information, and lets authorities tell drivers to switch off the autonomous systems in the event of a safety problem.

Volvo says the system can handle everything from heavy traffic to emergency situations, but it is not designed to operate in inclement weather. In that situation, or if a glitch is detected, the system hands control back to the human driver.

Initially, the system will only be tested on roads without oncoming traffic, other drivers, or pedestrians, Volvo says.

Given Volvo's celebrated obsession with safety, it's not surprising that the Swedish carmaker is so invested in autonomy. Proponents of self-driving cars expect them to significantly reduce the number of crashes.

Volvo also believes autonomous cars and trucks could drastically cut overall fuel consumption, and free up more time for the humans inside.

Cool buildings without air conditioning

Sunday, November 30, 2014 / No Comments

In the US, air-conditioning systems are responsible for about 15% of the energy consumed by the building sector. With a new material, still at the laboratory stage, invented by researchers at Stanford University, substantial savings could be made in this area.

Stanford University engineers have developed an innovative coating to help cool buildings. Their findings, recently published in Nature, indicate that the process can keep the interior temperature around 5 °C below the outside temperature, without relying on any energy source. At the heart of the invention is an ultra-thin (only 1.8 microns thick), multilayer material whose action is twofold: first, it reflects radiation from the sun to prevent heat from entering the building; second, it absorbs heat from inside the building and radiates it back outside, without heating the surrounding air.


In the American South, homes are often covered with a white roof that reflects sunlight, limiting the heat entering the house. The coating invented by Professor Shanhui Fan's team applies the same principle with remarkable efficiency: the material reflects 97% of the solar radiation that strikes it. The real innovation, however, lies in its second property. Recall that objects, like living beings, emit heat as infrared radiation, invisible to the naked eye. It is the heat of this radiation that we feel, for example, when we stand before a closed oven. It is this radiation that the coating developed by the Stanford researchers can send from the building out to space.

The new material is composed of seven layers of silicon dioxide (SiO2) and hafnium dioxide (HfO2) of varying thickness, deposited on a thin silver layer. Together, these layers form a structure that both reflects incoming radiation and absorbs heat from inside, re-emitting it at infrared wavelengths between 8 and 13 micrometers. The molecules in the air cannot absorb heat emitted in this wavelength band, so the surrounding air does not warm up: the heat is released directly into space.
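The orders of magnitude at play can be checked with a back-of-the-envelope calculation. This sketch is not the model from the Nature paper: it assumes a roof at 300 K, full sun at 1000 W/m², an ideal emitter in the 8-13 μm window, and it ignores atmospheric downwelling radiation and convection. It only shows that the 3% of absorbed sunlight is small compared with what a warm surface can radiate through the sky window.

```python
# Back-of-the-envelope sketch (not the paper's model): solar power absorbed
# by a 97%-reflective coating versus blackbody emission in the 8-13 um
# atmospheric window, for an assumed 300 K surface under 1000 W/m^2 sun.
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck_emissive_power(lam_m: float, T: float) -> float:
    """Hemispherical blackbody emissive power per unit wavelength, W/m^2/m."""
    return (2 * math.pi * H * C**2 / lam_m**5) / (
        math.exp(H * C / (lam_m * KB * T)) - 1)

def band_power(T: float, lam_lo_um: float, lam_hi_um: float,
               steps: int = 2000) -> float:
    """Blackbody emissive power in a wavelength band, by midpoint integration."""
    lo, hi = lam_lo_um * 1e-6, lam_hi_um * 1e-6
    dl = (hi - lo) / steps
    return sum(planck_emissive_power(lo + (i + 0.5) * dl, T)
               for i in range(steps)) * dl

T = 300.0                             # assumed roof temperature, K
absorbed_solar = (1 - 0.97) * 1000    # 3% of ~1000 W/m^2 full sun
window_emission = band_power(T, 8, 13)

print(f"absorbed solar    : {absorbed_solar:.0f} W/m^2")
print(f"8-13 um emission  : {window_emission:.0f} W/m^2")
```

Under these simplified assumptions the window emission comfortably exceeds the absorbed sunlight, which is the basic reason such a surface can stay cooler than its surroundings even in full sun.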

Some technical difficulties to solve

For now, the prototype is no larger than a pizza. Its designers believe the material can be economically viable if it is shaped to suit practical applications. One solution could be to spray the material onto a solid backing that could be installed on roofs. A way must still be found to channel the building's internal heat to the outer coating so that the coating can radiate it away.

Despite these difficulties, Shanhui Fan's team is confident. The professor of electrical engineering sees this project as a first step toward using the universe as an accessible, unlimited heat sink. Until the coating can be installed on the world's rooftops, he notes that it can improve the performance of existing cooling systems by helping them dump their waste heat into space.

Intel wants to use the body for transferring data

Monday, October 13, 2014 / No Comments

Two students on a summer internship at Intel have developed a tactile interface for transferring data between two devices via a ring powered solely by electromagnetic energy carried through the human body. The system can currently store only a few bytes, but it opens interesting prospects for interaction between connected objects.

Using the human body to transfer data between two computers: that is what two students in the Intel Collaborators internship program have managed to do. Over three months full-time, the program trains student teams to "solve technical problems in the real world." In this case, Patrick Buah Jr. and Arsen Zoksimovski created a device that lets you copy and paste between two laptops using a ring.


To do this, they installed a touch sensor on each PC. When the wearer of the ring puts a finger on the first sensor, the application offers to transfer a file. The user selects the desired file in Windows Explorer, and it is then transmitted to the ring as an electromagnetic signal, the body acting as the antenna and the ring as the transceiver. The user then places a finger on the touch sensor of the second laptop, selects the "retrieve" option, and the file is copied instantly.
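The copy/paste flow above can be modeled as a tiny store-and-forward protocol. This is a hypothetical sketch; Intel has not published an API for the prototype, so the class, method names and capacity below are all invented for illustration.

```python
# Hypothetical sketch of the ring's copy/paste flow. All names and the
# capacity figure are illustrative; Intel published no API for this device.

class Ring:
    """Models the few-byte store carried on the body-powered ring."""
    CAPACITY = 8  # bytes; the prototype held only a few bytes

    def __init__(self) -> None:
        self._payload = b""

    def receive(self, data: bytes) -> None:
        # Laptop A modulates the file onto an electromagnetic signal;
        # the body carries it to the ring, which stores it.
        if len(data) > self.CAPACITY:
            raise ValueError("payload exceeds ring capacity")
        self._payload = data

    def transmit(self) -> bytes:
        # Touching laptop B's sensor triggers retrieval of the payload.
        return self._payload

ring = Ring()
ring.receive(":-)".encode("ascii"))     # "copy" on laptop A
print(ring.transmit().decode("ascii"))  # "paste" on laptop B prints :-)
```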

The ring is not powered by a battery, and testing was done with a file of only a few bytes containing an emoticon. But the two students have already imagined other uses: for example, retrieving GPS coordinates from a mapping application and transferring them to a navigation device by touching it, or storing the ID of a file on the ring and then simply touching a compatible printer to obtain a hard copy.

A software framework and APIs were developed to exploit the method with various applications. Most of the work focused on analyzing the electromagnetic waves and managing the interference caused by their passage through the human body. The objective was to obtain a signal stable enough to read, copy, store and transmit data via a device powered only by that electromagnetic energy.

"We had to adjust the electrical circuit using capacitors and inductors, in order to reduce signal loss to a minimum and get the most stable connection possible for transferring data. This has to happen quickly, so that you do not have to keep your finger on the touch sensor for more than a second," explain Patrick Buah Jr. and Arsen Zoksimovski in the statement released by Intel.

They see two possible outlets for their invention. Built into connected objects or smart clothing, such a communication protocol could create a network in which the human body is the channel: various connected objects would communicate with each other through electromagnetic waves without external power. The second possibility is a connected object or garment that interacts with a third-party terminal such as a computer, printer, smartphone or screen. Communication could even take place between two people touching each other to exchange data. Intel, however, has not referred to any specific project based on this technology.

DHL will use drones to deliver medication to North Sea island

Sunday, September 28, 2014 / No Comments

DHL has obtained permission to use a drone to deliver packages to the German island of Juist. This is the first time this type of aircraft has been authorized for such flights in Europe.

Called the Parcelcopter, this drone will be used primarily to deliver medical supplies to a pharmacy located on the island of Juist, in northern Germany. The aircraft has received approval from the German Ministry of Transport to fly within a restricted area. The drone can cover the 12 km separating the mainland from the island of Juist, flying mainly over the sea, and carry parcels weighing up to 1.2 kg.
According to DHL, the service is expected to begin as early as Friday and run until the end of October, weather permitting. For the island's 1,700 residents, it will improve the availability of medicines in emergencies. Weighing 5 kg and with a top speed of 60 km/h, the Parcelcopter can reach the island in 15-30 minutes.
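Those quoted times are easy to sanity-check: 12 km at the 60 km/h top speed works out to 12 minutes in still air, so the 15-30 minute range presumably reflects cruising below top speed and coastal winds (an inference, not a DHL statement).

```python
# Sanity check on the quoted flight time: distance and top speed are from
# the article; the interpretation of the 15-30 minute range is an inference.
DISTANCE_KM = 12
TOP_SPEED_KMH = 60

def flight_minutes(speed_kmh: float) -> float:
    """Flight time in minutes at a constant ground speed."""
    return DISTANCE_KM / speed_kmh * 60

print(flight_minutes(TOP_SPEED_KMH))  # 12.0 minutes at top speed
print(flight_minutes(30))             # 24.0 minutes at half speed
```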


If DHL's project is successful, the service could grow and compete with Amazon Prime Air. However, in Germany as elsewhere, drones are subject to restrictive regulations: they may not take off or land in populated areas, and they must be operated by remote control rather than flying as autonomous drones. Moreover, their altitude may not exceed 20 meters.

Generating electricity from chewing: it's possible!

Friday, September 19, 2014 / No Comments

A team of Canadian researchers has developed a prototype piezoelectric chin strap able to harvest energy from the movements of the jaw. Although it is not yet practical, the concept demonstrates the possibility of powering small electronic devices such as hearing aids and cochlear implants.

Recently in South Korea, researchers presented a prototype piezoelectric nano-generator capable of powering a pacemaker from the movements of the muscle to which it would be fixed. Recall that a piezoelectric material produces electricity when subjected to mechanical stress (and vice versa). It is precisely with this kind of material that a team from the École de technologie supérieure (ETS), an engineering school in Montreal, Canada, has developed a method to harvest energy from the movements of the jaw.

The prototype is a chin strap made of a piezoelectric composite material that has the potential to generate enough electricity to power small electronic devices such as cochlear implants, hearing aids or wireless earphones. "The jaw movements that occur when a person chews gum, eats or talks are the most promising for harvesting energy from the muscle activity of the head," the researchers write in their paper published in the journal Smart Materials and Structures. They calculated that over a typical day, jaw movements can produce up to 580 joules, equivalent to about 7 milliwatts.
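The paper's two figures are consistent with each other: 580 joules spread over a 24-hour day corresponds to an average power of roughly 7 milliwatts.

```python
# Checking the paper's figures: 580 J over a full day, as average power.
DAILY_ENERGY_J = 580
SECONDS_PER_DAY = 24 * 3600

avg_power_mw = DAILY_ENERGY_J / SECONDS_PER_DAY * 1000
print(f"{avg_power_mw:.1f} mW")  # about 6.7 mW, i.e. roughly 7 mW
```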

The ETS chin strap is made of a layer of piezoelectric fiber composite (PFC) incorporating electrodes in a polymer adhesive matrix, which reacts to the movement of the jaw. The strip is placed under the chin and held in place by elastic side straps attached to a pair of earmuffs. The PFC band is itself connected to a load resistor and a digital multimeter.

To prove the concept, the researchers asked a test subject to chew gum for 60 seconds. The maximum power harvested reached 18 mW, but the average was 10 mW. The ETS team admits, however, that "the amount of energy generated by the system is not able to be used in practical applications."

One goal is to increase the number of piezoelectric elements to produce enough electricity to power small portable electronics. This can be done by stacking several layers of PFC; twenty layers would be enough to power an electronic hearing protector, the scientists say. But ergonomics would then likely pose bigger problems, because the band would be 6 mm thick. Making such a chin strap comfortable to wear without attracting prying eyes will not be easy...

New MIT robot cheetah can run and jump silently

Thursday, September 18, 2014 / No Comments

Two years ago, the Cheetah robot set running speed records. The robotic cat is back in a new, quieter, fully electric version capable of running on uneven ground and jumping over obstacles.

A little over two years ago, Futura-Sciences presented the quadruped robot Cheetah, king of the treadmill sprint, capable of reaching 30 km/h. A real technical feat, tempered by the fact that the robot was tethered to a hydraulic pump powered by a combustion engine, which greatly limited its mobility. A team of researchers from the Massachusetts Institute of Technology (MIT) has just introduced an evolution with a new control algorithm and an electric propulsion system.

The new Cheetah robot is both quieter and more independent than its predecessor. It can run at 16 km/h and jump over obstacles 33 cm high, and the MIT researchers say this version could reach 48 km/h. "This is the first time we have demonstrated that a robot powered by electric motors can run and jump over obstacles," said one of the team members in a Cheetah video posted on YouTube.


To achieve this result, the MIT team developed a high-torque electric motor with amplifiers that control the torque. They then had to develop an algorithm capable of reproducing the cheetah's distinctive running kinematics: its front and hind legs move in tandem pairs, allowing it to reach a full speed of up to 60 km/h.

A bit faster than Usain Bolt

When the cat runs, each paw touches the ground for only a split second before lifting off again. In biomechanics, the percentage of time that a leg is in contact with the ground is called the "duty cycle": the faster an animal runs, the shorter this cycle, the MIT researchers explain.

To reproduce this, the robot's algorithm controls the force that each leg exerts during the split second it touches the ground, which helps maintain a given speed. "Once I know how long my leg is on the ground and how long it is in the air, I know how much force I need to apply to compensate for the gravitational force," says Professor Sangbae Kim, who has worked on the Cheetah project from the beginning. "Now we know how to control jumps at different paces. And to jump, we just have to triple the force and the robot clears the obstacle." The scientist adds that this force-control approach is close to how sprint champions like Usain Bolt run. The technique ensures the stability and agility of the quadruped robot, including over rough terrain, as the video shows. Moreover, the robot is almost as quiet as its living model.
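The force-control idea above has a simple core: averaged over a stride, the vertical force must balance gravity, so a shorter duty cycle demands a proportionally larger stance force. This sketch uses assumed numbers (the robot's mass and duty cycles are not from MIT) purely to illustrate the relationship.

```python
# Illustrative sketch of duty-cycle force control. The mass and duty-cycle
# values are assumed for illustration, not MIT's figures.
G = 9.81  # gravitational acceleration, m/s^2

def required_ground_force(mass_kg: float, duty_cycle: float) -> float:
    """Average vertical force during stance needed to support body weight.

    duty_cycle: fraction of the stride a leg set spends on the ground (0-1).
    """
    return mass_kg * G / duty_cycle

# A hypothetical 30 kg robot: halving the duty cycle doubles the force.
print(required_ground_force(30, 0.40))  # slow trot
print(required_ground_force(30, 0.20))  # fast run: twice the force
```

This is also why jumping requires the control described in the quote: clearing an obstacle means commanding a multiple of the steady-running stance force for one stride.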


This technology could become a standard for all biped or quadruped robots. The MIT researchers believe it could even be adapted to build a new type of prosthetic leg, or even to invent a new mode of transportation to replace the car!

Scientists at the Cavendish Laboratory of the University of Cambridge have for the first time developed a graphene-based flexible screen

Wednesday, September 17, 2014 / No Comments

A working prototype of a graphene-based flexible screen has been unveiled by the University of Cambridge and the company Plastic Logic. This is the first time graphene has been used to drive the electronics of a flexible display. The approach could make this type of display simpler and less expensive to manufacture.

This is a first that opens great opportunities for commercializing flexible displays. The Cambridge Graphene Centre at the University of Cambridge and Plastic Logic, a manufacturer of flexible displays, have just presented a graphene-based flexible display prototype. It is a monochrome, active-matrix electrophoretic screen identical to those found in e-readers, except that it is made of plastic rather than glass; a demonstration can be seen in a video posted on YouTube. The backplane, the electronic layer that drives the display, includes a graphene electrode. According to its designers, this is the first time the material has been used for this purpose.


An advantage of this solution is that its flexibility would allow completely foldable graphene screens. "This is an important step in enabling the manufacture of fully flexible and portable devices," said Professor Andrea Ferrari, director of the Cambridge Graphene Centre. The backplane has a density of 150 pixels per inch and incorporates an electrode in which a graphene film replaces the usual sputter-deposited metal electrode layer. As a result, the backplane can be made at low temperature by printing the graphene onto the substrate.

The technique can be transposed to LCD and OLED

The prototype in question was produced at a temperature of 100 °C using the thin-film-transistor technology developed by Plastic Logic. The deposited electrode was then patterned at the micrometer scale to create the necessary connections. This process could help simplify flexible-display manufacturing and reduce its cost. "The potential of graphene is well known, but now we need to develop the industrial process engineering to get graphene out of the laboratory and into industry," says Indro Mukerjee, CEO of Plastic Logic. This British company, founded in 2000, is itself a result of work at the Cavendish Laboratory of the University of Cambridge.


The next step is to transpose this technique to LCD and OLED display systems in order to obtain flexible color screens whose refresh rate is fast enough for video playback. The stated goal is to produce a flexible OLED screen within the coming year.