A new technology developed by the startup Lumicell, an MIT spinout, is providing surgeons with a real-time, in-depth view of breast cancer tissue during surgery, enhancing the precision and effectiveness of breast cancer procedures.
By using a handheld scanner in combination with an optical imaging agent, the device allows surgeons to immediately visualize residual cancer cells in the surgical cavity, ensuring more complete tumor removal. This innovation helps minimize the likelihood of leaving behind cancerous tissue, which could otherwise lead to follow-up surgeries.
The technology integrates advanced imaging techniques with AI algorithms, enabling surgeons to assess tumor margins in real-time, as opposed to the current standard where pathology results take days. With this immediate feedback, surgeons can make more informed decisions during the operation, potentially reducing recurrence rates and improving patient outcomes.
If widely adopted, Lumicell's approach could transform the standard of care by making surgeries more targeted, reducing the need for repeat procedures, and improving recovery times. The FDA's recent approval of Lumicell’s technology marks a significant step forward in personalized and precise cancer care.
California has deployed AI systems that analyze real-time data feeds from thousands of cameras positioned throughout the state, using pattern recognition to detect subtle changes in the environment, such as smoke or other early indicators of fire. When suspicious activity is spotted, the AI can alert fire response teams almost instantly, enabling them to take preventive measures before a small spark escalates into a large-scale wildfire.
AI also plays a role in predictive modeling, using historical data, weather patterns, and vegetation analysis to forecast where wildfires are most likely to occur. This helps in preemptively directing resources, such as clearing brush or positioning firefighting crews strategically, to areas at high risk.
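As a rough sketch of the kind of predictive scoring described above — combining weather, vegetation, and historical data into a single risk estimate — here is a minimal, purely illustrative Python example. The feature names, weights, and logistic form are assumptions for illustration, not California's actual model:

```python
from math import exp

# Hypothetical weights over the kinds of inputs the article mentions:
# weather (wind, humidity), vegetation dryness, and historical fire frequency.
# These values are illustrative assumptions, not a real deployed model.
WEIGHTS = {
    "wind_speed_kmh": 0.04,
    "humidity_pct": -0.05,        # higher humidity lowers risk
    "vegetation_dryness": 2.0,    # 0 (lush) .. 1 (tinder-dry)
    "fires_past_decade": 0.3,     # historical fire count for the region
}
BIAS = -1.0

def wildfire_risk(features: dict) -> float:
    """Logistic risk score in [0, 1] from weighted region features."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + exp(-z))

calm = {"wind_speed_kmh": 10, "humidity_pct": 60,
        "vegetation_dryness": 0.2, "fires_past_decade": 1}
windy_dry = {"wind_speed_kmh": 50, "humidity_pct": 10,
             "vegetation_dryness": 0.9, "fires_past_decade": 4}

print(wildfire_risk(calm) < wildfire_risk(windy_dry))  # True
```

A real system would learn such weights from historical fire records rather than hand-picking them, but the shape of the computation — features in, a comparable risk score out — is the same.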
The use of AI in wildfire detection offers significant benefits, including faster response times and more efficient allocation of firefighting resources. However, it also comes with challenges, such as ensuring the accuracy of AI predictions and managing the vast amounts of data collected.
Overall, California’s deployment of AI technology is part of a broader initiative to mitigate the devastating impact of wildfires and safeguard communities from the increasing frequency and severity of these events.
The primary applications of these neuronal "wearables" include detailed mapping of electrical and chemical signals in subcellular areas, which could provide deeper insights into how the brain functions at the most intricate levels. By accessing and monitoring these tiny regions, researchers can better understand processes like signal transmission and synaptic activity. This could lead to breakthroughs in understanding neurological diseases and disorders.
Moreover, there is potential for these devices to be used in therapeutic applications. For example, they could be engineered to deliver electrical stimulation or drugs directly to specific parts of the brain, possibly aiding in the restoration of lost brain functions or modifying neuronal activity to address disorders such as epilepsy or Parkinson's disease.
This new approach marks a significant step in neurotechnology, merging micro-engineering and neuroscience to create tools that are more integrated with biological structures than ever before.
Elon Musk projected on Tuesday at the Future Investment Initiative in Saudi Arabia that humanoid robots may surpass the human population by 2040. Musk envisions about 10 billion robots globally, enabled by advancements that could bring the cost down to between $20,000 and $25,000 for a "robot that can do anything." This aligns closely with Tesla's Optimus robot pricing, which Musk anticipates could reach $20,000 to $30,000 in the long term with mass production.
The Tesla Optimus project began in 2021 and, despite a rocky start with a human in a robot costume, has shown incremental progress. At Tesla's recent “We, Robot” event, Optimus units performed tasks such as handing out drinks and interacting with guests, though some actions were teleoperated to enhance performance. Tesla’s Optimus lead Milan Kovac confirmed that about 20 robots were active during the event, with minor incidents, including a robot fall.
Currently, two Optimus robots work on the factory floor, though Tesla has not specified their roles. Musk projected limited production to begin next year, targeting thousands of robots in Tesla facilities by 2025 and mass production by 2026, ultimately aiming for Optimus to be Tesla’s largest product line and potentially pushing Tesla's valuation to $25 trillion.
Tesla faces competition from companies like Figure AI, Apptronik, Toyota Research Institute, and Boston Dynamics, which are also investing heavily in humanoid robot technology.
Tesla CEO Elon Musk has introduced the Cybercab, the company’s highly anticipated robotaxi, setting its price at under $30,000. Musk also announced Tesla's intention to launch autonomous driving capabilities for its Model 3 and Model Y vehicles in California and Texas by next year.
The unveiling took place at the We, Robot event at Warner Bros. Studios in Burbank, California. Musk arrived in the Cybercab, donning his signature black leather jacket and accompanied by a man dressed as an astronaut. Human-like robots entertained the crowd, dancing and serving drinks to attendees, adding a futuristic touch to the celebration.
Prior to Tesla’s announcement, many analysts remained skeptical about the company’s ability to deliver on its long-standing promise of fully self-driving vehicles. Tesla’s robotaxi vision has been in the pipeline for nearly five years, with autonomous driving features teased for almost a decade.
At the We, Robot event, Musk revealed that 20 additional Cybercabs were present, along with 50 fully autonomous vehicles available for test drives across the 20-acre venue. He highlighted the Cybercab’s revolutionary design, featuring neither a steering wheel nor pedals and utilizing inductive charging instead of a plug.
Musk also noted that Tesla had “overspecced” the computer in each vehicle, employing an Amazon Web Services-like approach that allows computational power to be distributed across its vehicle network, enhancing efficiency and functionality.
Musk announced that Tesla expects the Cybercab to cost under $30,000 (approximately £22,980 or A$44,500). He projected the robotaxi to be in production "in 2026" before pausing and amending his estimate to “before 2027,” acknowledging his tendency toward optimistic timelines.
Envisioning a future transformed by autonomous vehicles, Musk described a world where parking lots could be repurposed as parks, and passengers could relax, sleep, or watch movies in a “comfortable little lounge” during their trips. He noted that Cybercabs could serve as Uber-like taxis when not in use by their owners and even suggested that people could operate fleets of these vehicles, creating ride-share networks akin to a “shepherd with a flock of cars.”
“It’s going to be a glorious future,” he declared.
Tesla’s Model 3 and Model Y vehicles are set to transition from supervised to fully unsupervised self-driving, starting in California and Texas next year, with expansion planned across the U.S. and globally as regulatory approvals permit. While the S and X models will also gain autonomous capabilities, Musk did not specify a timeline for these.
“With autonomy, you get your time back. It’ll save lives, a lot of lives, and prevent injuries,” Musk emphasized, citing Tesla’s extensive driving data collected from millions of vehicles as a key factor in making autonomous driving safer than human drivers.
“With that amount of training data, it’s obviously going to be much better than a human can be because you can’t live a million lives,” Musk stated. “It doesn’t get tired, and it doesn’t text. It’ll be 10, 20, even 30 times safer than a human.”
Musk also unveiled the “Robovan,” an autonomous van designed to carry up to 20 passengers and cargo, though he did not disclose pricing or a production timeline. In addition, he highlighted significant progress on Tesla’s humanoid robot, Optimus. As the robots moved among attendees to serve drinks, Musk urged, “Please be nice to the Optimus robots.” At the end of the event, several robots danced on a neon-lit stage to Daft Punk’s “Robot Rock,” with Musk estimating a future production cost of around $30,000 per robot.
The event showcased Tesla’s autonomous innovations amid ongoing challenges. The company currently faces a class-action lawsuit in the U.S. from Tesla owners who had been promised full self-driving capabilities that remain undelivered. Following pressure from U.S. safety regulators in February last year, Tesla issued a recall to address software allowing speeding and other violations in its full self-driving mode. In April, regulators launched an investigation into whether Tesla’s full self-driving and autopilot systems were sufficiently ensuring that drivers remained attentive, prompted by reports of 20 crashes involving autopilot since the initial recall.
Researchers at Paderborn University in Germany have developed high-performance computing (HPC) software capable of analyzing and describing the quantum states of a photonic quantum detector.
HPC utilizes advanced classical computers to handle large datasets, conduct complex calculations, and swiftly tackle challenging problems. However, many classical computational methods cannot be directly applied to quantum applications. This new study indicates that HPC may offer valuable tools for quantum tomography, the technique employed to ascertain the quantum state of a quantum system.
In their study, the researchers state, “By developing customized open-source algorithms using high-performance computing, we have performed quantum tomography on a photonic quantum detector at a mega-scale.”
A quantum photonic detector is a sophisticated instrument designed to detect and measure individual light particles (photons). Highly sensitive, it can collect detailed information about various properties of photons, including their energy levels and polarization. This data is invaluable for quantum research, experiments, and technologies.
Accurately determining the quantum state of the photonic detector is crucial for achieving precise measurements. However, the process of performing quantum tomography on such an advanced tool requires handling large volumes of data.
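To make the inverse problem concrete, here is a toy detector-tomography sketch in NumPy: probe a detector with known states, record the click probabilities, and solve a linear system for the detector's per-photon-number response. The detector model, probe states, and space size are illustrative assumptions — the study's own open-source HPC algorithms tackle vastly larger spaces:

```python
import numpy as np
from math import factorial

# Toy detector tomography: recover a detector's response to each photon
# number from measured click statistics. All numbers and the detector
# model below are illustrative assumptions, not from the Paderborn study.
N = 8                                  # truncated photon-number space
n = np.arange(N)

# Assumed "true" response: a click detector with 60% efficiency,
# P(click | n photons) = 1 - (1 - eta)^n.
eta = 0.6
theta_true = 1 - (1 - eta) ** n

# Probe the detector with coherent states of varying mean photon number;
# each probe has a Poisson photon-number distribution C[i, n].
means = np.linspace(0.2, 5.0, 25)
fact = np.array([factorial(k) for k in n], dtype=float)
C = np.exp(-means[:, None]) * means[:, None] ** n / fact

# Measured click probabilities: p_i = sum_n C[i, n] * theta_n.
p = C @ theta_true

# Tomography = solving the linear inverse problem for theta.
theta_est, *_ = np.linalg.lstsq(C, p, rcond=None)
print(np.allclose(theta_est, theta_true, atol=1e-3))  # True
```

Real data adds measurement noise and physical constraints (responses between 0 and 1), and scaling the space from 8 elements to millions is exactly where HPC becomes necessary.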
This is where the newly developed HPC software comes into play. To showcase its capabilities, the researchers stated, “We performed quantum tomography on a megascale quantum photonic detector covering a Hilbert space of 10⁶.”
Hilbert space is a mathematical concept that describes a multi-dimensional space where each point represents a possible state of a quantum system. It includes an inner product for calculating distances and angles between states, which is essential for understanding concepts such as probability and superposition. These spaces can possess infinite dimensions, representing a wide array of potential states.
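A quick NumPy illustration of the inner product at work, using a tiny two-dimensional Hilbert space (the states are generic textbook examples, not from the study):

```python
import numpy as np

# Two single-qubit states in a 2-dimensional Hilbert space,
# written as complex vectors (illustrative textbook states).
zero = np.array([1, 0], dtype=complex)               # |0>
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition

# The inner product <0|+>; np.vdot conjugates its first argument,
# matching the quantum-mechanical convention.
amplitude = np.vdot(zero, plus)

# Born rule: probability of finding |+> in the state |0> is |<0|+>|^2.
probability = abs(amplitude) ** 2
print(probability)  # ≈ 0.5
```

The megascale detector in the study lives in a space of a million such dimensions rather than two, but the same inner-product machinery underlies every probability it predicts.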
With the HPC software, the researchers successfully “completed calculations that described the quantum photonic detector within a few minutes—faster than anyone else before,” they added.
This optimization enables them to manage and reconstruct quantum systems with up to 10¹² elements. “This demonstrates the unprecedented extent to which this tool can be applied to quantum photonic systems,” said Timon Schapeler, the first author of the study and a research scientist at Paderborn University.
“As far as we know, our work is the first contribution in the field of classical high-performance computing that facilitates experimental quantum photonics on a large scale,” Schapeler added.
The HPC-driven quantum tomography approach holds promise for advancing more efficient data processing, quantum measurement, and communication technologies in the future.
The study is published in the journal Quantum Science and Technology.
Wireless data has been transmitted at a remarkable speed of 938 gigabits per second, which is over 9,000 times faster than the average speed of a current 5G phone connection. This speed is equivalent to downloading more than 20 average-length movies each second and sets a new record for multiplex data, where two or more signals are combined.
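The headline figures are easy to sanity-check with back-of-envelope arithmetic (the ~5 GB size for an average-length movie is an assumption):

```python
# Back-of-envelope check of the headline figures.
link_gbps = 938        # demonstrated wireless rate, gigabits per second
movie_gb = 5           # assumed size of an average-length movie, gigabytes

gigabytes_per_second = link_gbps / 8               # 117.25 GB/s
movies_per_second = gigabytes_per_second / movie_gb
print(movies_per_second)                           # ≈ 23.4, i.e. "more than 20"

# Implied baseline: 938 Gb/s is ~9,000x a typical 5G connection,
# so the assumed average 5G download speed is roughly 938e3 / 9000 Mb/s.
implied_5g_mbps = 938e3 / 9000
print(implied_5g_mbps)                             # ≈ 104 Mb/s
```

The implied ~104 Mb/s baseline is consistent with typical reported average 5G download speeds in the UK.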
The high demand for wireless signals at large events such as concerts, sports games, and busy train stations often causes mobile networks to slow down significantly. This issue primarily arises from the limited bandwidth available to 5G networks. The portion of the electromagnetic spectrum allocated for 5G varies by country, typically operating at relatively low frequencies below 6 gigahertz and only within narrow frequency bands.
To enhance transmission rates, Zhixin Liu at University College London and his team utilized a broader range of frequencies than any previous experiment, spanning 5 gigahertz to 150 gigahertz and employing both radio waves and light.
Liu explains that while digital-to-analog converters are currently used to transmit zeros and ones as radio waves, they face challenges at higher frequencies. His team applied this technology to the lower portion of the frequency range and employed a different technique using lasers for the higher frequencies. By combining both methods, they created a wide data band that could be integrated into next-generation smartphones.
This innovative approach enabled the team to transmit data at 938 Gb/s, which is over 9,000 times faster than the average download speed of 5G in the UK. This capability could provide individuals with incredibly high data rates for applications that are yet to be imagined, and ensure that large groups of people can access sufficient bandwidth for streaming video.
While this achievement sets a record for multiplex data, single signals have been transmitted at even higher speeds, surpassing 1 terabit per second.
Liu likens splitting signals across wide frequency ranges to transforming the “narrow, congested roads” of current 5G networks into “10-lane motorways.” He notes, “Just like with traffic, wider roads are necessary to accommodate more cars.”
Liu mentions that his team is currently in discussions with smartphone manufacturers and network operators, expressing hope that future 6G technology will build on this work, although other competing approaches are also being developed.