
Archive for the ‘Quantum Computer’ Category

The future of artificial intelligence and quantum computing – Military & Aerospace Electronics

Posted: September 1, 2020 at 10:55 am



NASHUA, N.H. – Until the 21st Century, artificial intelligence (AI) and quantum computers were largely the stuff of science fiction, although quantum theory and quantum mechanics had been around for about a century. It was a century of great controversy, largely because Albert Einstein rejected quantum theory as originally formulated, leading to his famous statement, "God does not play dice with the universe."

Today, however, the debate over quantum computing is largely about when, not if, these kinds of devices will come into full operation. Meanwhile, other forms of quantum technology, such as sensors, already are finding their way into military and civilian applications.

"Quantum technology will be as transformational in the 21st Century as harnessing electricity was in the 19th," Michael J. Biercuk, founder and CEO of Q-CTRL Pty Ltd in Sydney, Australia, and professor of Quantum Physics & Quantum Technologies at the University of Sydney, told the U.S. Office of Naval Research in a January 2019 presentation.

On that, there is virtually universal agreement. But when and how remains undetermined.

For example, asked how and when quantum computing eventually may be applied to high-performance embedded computing (HPEC), Tatjana Curcic, program manager for Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) at the U.S. Defense Advanced Research Projects Agency in Arlington, Va., says it's an open question.

"Until just recently, quantum computing stood on its own, but as of a few years ago people are looking more and more into hybrid approaches," Curcic says. "I'm not aware of much work on actually getting quantum computing into HPEC architecture, however. It's definitely not mainstream, probably because it's too early."

As to how quantum computing eventually may influence the development, scale, and use of AI, she adds:

"That's another open question. Quantum machine learning is a very active research area, but it is quite new. A lot of people are working on that, but it's not clear at this time what the results will be. The interface between classical data, which AI is primarily involved with, and quantum computing is still a technical challenge."

Quantum information processing

According to DARPA's ONISQ webpage, the program aims to exploit quantum information processing before fully fault-tolerant quantum computers are realized.

This quantum computer, based on superconducting qubits, is inserted into a dilution refrigerator and cooled to a temperature of less than 1 Kelvin. It was built at IBM Research in Zurich.

This effort will pursue a hybrid concept that combines intermediate-sized quantum devices with classical systems to solve a particularly challenging set of problems known as combinatorial optimization. "ONISQ seeks to demonstrate the quantitative advantage of quantum information processing by leapfrogging the performance of classical-only systems in solving optimization challenges," the agency states. ONISQ researchers will be tasked with developing quantum systems that are scalable to hundreds or thousands of qubits with longer coherence times and improved noise control.

Researchers will also be required to efficiently implement a quantum optimization algorithm on noisy intermediate-scale quantum devices, optimizing allocation of quantum and classical resources. Benchmarking will also be part of the program, with researchers making a quantitative comparison of classical and quantum approaches. In addition, the program will identify classes of problems in combinatorial optimization where quantum information processing is likely to have the biggest impact. It will also seek to develop methods for extending quantum advantage on limited size processors to large combinatorial optimization problems via techniques such as problem decomposition.
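A concrete way to see the kind of problem ONISQ targets is Max-Cut, a canonical combinatorial optimization task (the choice of Max-Cut here is illustrative, not taken from the program): a classical brute-force solver must examine all 2^n ways of splitting a graph's nodes into two groups, which is the exponential cost that hybrid quantum-classical approaches hope to undercut. A minimal sketch:

```python
from itertools import product

def max_cut_brute_force(n_nodes, edges):
    """Exhaustively search all 2^n bipartitions for the maximum cut.

    The exponential scaling of this search is exactly why programs like
    ONISQ look for a quantum speed-up on such optimization problems.
    """
    best_value, best_assignment = -1, None
    for assignment in product([0, 1], repeat=n_nodes):
        # An edge is "cut" when its endpoints land in different partitions.
        value = sum(1 for u, v in edges if assignment[u] != assignment[v])
        if value > best_value:
            best_value, best_assignment = value, assignment
    return best_value, best_assignment

# A 4-node cycle: the optimal cut separates alternating nodes (value 4).
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(max_cut_brute_force(4, edges))
```

Even a small improvement over this exhaustive search matters at scale, since the search space doubles with every added node.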

The U.S. government has been the leader in quantum computing research since the founding of the field, but that too is beginning to change.

"In the mid-90s, NSA [the U.S. National Security Agency at Fort Meade, Md.] decided to begin an open academic effort to see if such a thing could be developed. All that research has been conducted by universities for the most part, with a few outliers, such as IBM," says Q-CTRL's Biercuk. "In the past five years, there has been a shift toward industry-led development, often in cooperation with academic efforts. Microsoft has partnered with universities all over the world and Google bought a university program. Today many of the biggest hardware developments are coming from the commercial sector."

"Quantum computing remains deep in research, but there are hardware demonstrations all over the world. In the next five years, we expect the performance of these machines to be advanced to the point where we believe they will demonstrate a quantum advantage for the first time. For now, however, quantum computing has no advantages over standard computing technology. Quantum computers are research demonstrators and do not solve any computing problems at all. Right now, there is no reason to use quantum computers except to be ready when they are truly available," Biercuk continues.

AI and quantum computing

Nonetheless, the race to develop and deploy AI and quantum computing is global, with the world's leading military powers expecting that these, along with other breakthrough technologies like hypersonics, could make the first nation to deploy them successfully as dominant as the U.S. was following the first detonations of atomic bombs. That is especially true for autonomous mobile platforms, such as unmanned aerial vehicles (UAVs), interfacing with those vehicles' onboard HPEC.

Of the two, AI is the closest to deployment, but also the most controversial. A growing number of the world's leading scientists, including the late Stephen Hawking, have warned that real-world AI could easily duplicate the actions of the fictional Skynet in the Terminator movie series. Launched with total control over the U.S. nuclear arsenal, Skynet became sentient and decided the human race was a dangerous infestation that needed to be destroyed.

"The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it will take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded," Stephen Hawking warned in 2014.

Such dangers have been recognized at least as far back as the publication of Isaac Asimov's short story "Runaround" in 1942, which included his Three Laws of Robotics, designed to control otherwise autonomous robots. In the story, the laws were set down in 2058:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Whether it would be possible to embed and ensure unbreakable compliance with such laws in an AI system is unknown. But limited degrees of AI, known as machine learning, already are in widespread use by the military and advanced stages of the technology, such as deep learning, almost certainly will be deployed by one or more nations as they become available. More than 50 nations already are actively researching battlefield robots.

Military quantum computing

AI-HPEC would give UAVs, next-generation cruise missiles, and even maneuverable ballistic missiles the ability to alter course to new targets at any point after launch, recognize countermeasures, and avoid, misdirect, or even destroy them.

Quantum computing, on the other hand, is seen by some as providing little, if any, advantage over traditional computer technologies; by many as requiring cooling and size, weight, and power (SWaP) improvements not possible with current technologies to make it applicable to mobile platforms; and by most as being little more than a research tool for perhaps decades to come.

Perhaps the biggest stumbling block to mobile platform-based quantum computing is cooling: it currently requires a cooling unit, operating at near absolute zero and the size of a refrigerator, to handle a fractional piece of quantum computing.

Military trusted computing experts are considering new generations of quantum computing for creating nearly unbreakable encryption for super-secure defense applications.

"A lot of work has been done and things are being touted as operational, but the most important thing to understand is this isn't some simple physical thing you throw in suddenly and it works. That makes it harder to call it deployable; you're not going to strap a quantum computer to a handheld device. A lot of solutions are still trying to deal with cryogenics and how you deal with deployment of cryo," says Tammy Carter, senior product manager for GPGPUs and software products at Curtiss-Wright Defense Solutions in Ashburn, Va.

"AI is now a technology in deployment. Machine learning is pretty much in use worldwide," Carter says. "We're in a migration of figuring out how to use it with the systems we have. Quantum computing will require a lot of engineering work, and demand may not be great enough to push the effort. From a cryogenically cooled electronics perspective, I don't think there is any insurmountable problem. It absolutely can be done; it's just a matter of decision making to do it, prioritization to get it done. These are not easily deployed technologies, but they certainly can be deployed."

Given its current and expected near-term limitations, research has increased on the development of hybrid systems.

"The longer-term reality is a hybrid approach, with the quantum system not going mobile any time soon," says Brian Kirby, physicist in the Army Research Laboratory Computational & Informational Sciences Directorate in Adelphi, Md. "It's a mistake to forecast a timeline, but I'm not sure putting a quantum computer on such systems would be valuable. Having the quantum computer in a fixed location and linked to the mobile platform makes more sense, for now at least. There can be multiple quantum computers throughout the country; while individually they may have trouble solving some problems, networking them would be more secure and able to solve larger problems."

"Broadly, however, a quantum computer can't do anything a practical home computer can't do, but it can potentially solve certain problems more efficiently," Kirby continues. "So you're looking at potential speed-up, but there is no problem a quantum computer can solve that a normal computer can't. Beyond the basics of code-breaking and quantum simulations affecting material design, right now we can't necessarily predict military applications."

Raising concerns

In some ways similar to AI, quantum computing raises nearly as many concerns as it does expectations, especially in the area of security. The latest Thales Data Threat Report says 72 percent of surveyed security experts worldwide believe quantum computing will have a negative impact on data security within the next five years.

At the same time, quantum computing is forecast to offer more robust cryptography and security solutions. For HPEC, that duality is significant: quantum computing can make it more difficult to break the security of mobile platforms, while simultaneously making it easier to do just that.

"Quantum computers that can run Shor's algorithm [leveraging quantum properties to factor very large numbers efficiently] are expected to become available in the next decade. These algorithms can be used to break conventional digital signature schemes (e.g., RSA or ECDSA), which are widely used in embedded systems today. This puts these systems at risk when they are used in safety-relevant long-term applications, such as automotive systems or critical infrastructures. To mitigate this risk, classical digital signature schemes used must be replaced by schemes secure against quantum computing-based attacks," according to the "Post-Quantum Cryptography in Embedded Systems" report in the August 2019 proceedings of the 14th International Conference on Availability, Reliability & Security.
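The reason efficient factoring is so damaging can be seen in a toy sketch (textbook-sized numbers, not a real parameter set): once the RSA modulus is factored, the private key falls out immediately, and factoring the modulus is exactly the step Shor's algorithm would accelerate.

```python
# Toy illustration of why efficient factoring breaks RSA: anyone who can
# factor the modulus n = p*q can derive the private exponent d from the
# public key (n, e).
def rsa_private_key_from_factors(p, q, e):
    phi = (p - 1) * (q - 1)   # Euler's totient of n = p*q
    return pow(e, -1, phi)    # modular inverse: d*e ≡ 1 (mod phi), Python 3.8+

# Tiny textbook primes; real RSA moduli are 2048+ bits, which is where a
# large fault-tolerant quantum computer running Shor's algorithm comes in.
p, q, e = 61, 53, 17
n = p * q
d = rsa_private_key_from_factors(p, q, e)

message = 42
ciphertext = pow(message, e, n)          # encrypt with the public key
assert pow(ciphertext, d, n) == message  # knowing the factors lets us decrypt
```

Post-quantum signature schemes avoid this by resting on problems (such as lattice problems) for which no efficient quantum algorithm is known.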

The security question is not quite so clean-cut as armor/anti-armor, but there is a developing bifurcation between defensive and offensive applications. On the defense side, deployed quantum systems are looked to for encoded communications. Experts say it seems likely the level of activity in China on quantum communications, which has been a major focus there for years, runs up against the development of quantum computing in the U.S. The two aspects are not clearly pitted one against one; rather, the two are moving independently.

Google's quantum supremacy demonstration has led to a rush to find algorithms robust against quantum attack. On the quantum communications side, the development of attacks on such systems has been underway for years, leading to a whole field of research based on identifying and exploiting quantum attacks.

Quantum computing could also help develop revolutionary AI systems. "Recent efforts have demonstrated a strong and unexpected link between quantum computation and artificial neural networks, potentially portending new approaches to machine learning. Such advances could lead to vastly improved pattern recognition, which in turn would permit far better machine-based target identification. For example, the hidden submarine in our vast oceans may become less hidden in a world with AI-empowered quantum computers, particularly if they are combined with vast data sets acquired through powerful quantum-enabled sensors," according to Q-CTRL's Biercuk.

"Even the relatively mundane near-term development of new quantum-enhanced clocks may impact security, beyond just making GPS devices more accurate," Biercuk continues. "Quantum-enabled clocks are so sensitive that they can discern minor gravitational anomalies from a distance. They thus could be deployed by military personnel to detect underground, hardened structures, submarines, or hidden weapons systems. Given their potential for remote sensing, advanced clocks may become a key embedded technology for tomorrow's warfighter."
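The clock-based sensing Biercuk describes rests on gravitational time dilation: two clocks at different heights, or above different mass concentrations, tick at fractionally different rates, roughly Δf/f ≈ gΔh/c². A back-of-envelope sketch (the numbers are standard physics, not taken from the article):

```python
# Fractional frequency shift between two clocks separated in height in
# Earth's gravity (weak-field approximation): df/f ≈ g * dh / c^2.
g = 9.81      # m/s^2, surface gravity
c = 2.998e8   # m/s, speed of light

def fractional_shift(delta_h_m):
    """Fractional rate difference for a height difference in metres."""
    return g * delta_h_m / c**2

# One metre of height difference shifts a clock's rate by roughly 1e-16.
# The best optical clocks resolve shifts near 1e-18, i.e. centimetre-scale
# height (or equivalent gravity) differences -- the basis for the
# gravimetry applications described above.
print(f"{fractional_shift(1.0):.2e}")
```

A mass anomaly such as a void or hardened structure perturbs local gravity, and hence clock rate, in the same way a small height change does.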

Warfighter capabilities

The early applications of quantum computing, while not embedded on mobile platforms, are expected to enhance warfighter capabilities significantly.

Jim Clarke, director of quantum hardware at Intel Corp. in Santa Clara, Calif., shows one of the company's quantum processors.

"There is a high likelihood quantum computing will impact ISR [intelligence, surveillance, and reconnaissance], solving logistics problems more quickly. But so much of this is in the basic research stage. While we know the types of problems and general application space, optimization problems will be some of the first where we will see advantages from quantum computing," says Sara Gamble, quantum information sciences program manager at ARL.

Biercuk says he agrees: "We're not really sure there is a role for quantum computing in embedded computing just yet. Quantum computing right now means very large systems embedded in mainframes, with access by the cloud. You can envision embedded computing accessing quantum computing via the cloud, but they are not likely to be very small, agile processors you would embed in a SWaP-constrained environment."

"But there are many aspects of quantum technology beyond quantum computing; the combination of quantum sensors could allow much better detection in the field," Biercuk continues. "The biggest potential impact comes in the area of GPS denial, which has become one of the biggest risk factors identified in every blueprint around the world. Quantum technology plays directly into this to perform dead-reckoning navigation in GPS-denied areas."

DARPAs Curcic also says the full power of quantum computing is still decades away, but believes ONISQ has the potential to help speed its development.

"The two main approaches industry is using are superconducting quantum computing and trapped ions. We use both of those, plus cold atoms [Rydberg atoms]. We are very excited about ONISQ and about seeing if we can get anything useful beyond classical computing. Four teams are doing hardware development with those three approaches," she says.

"Because these are noisy systems, it's very difficult to determine if there will be any advantages. The hope is we can address optimization problems faster than today, which is what we're working on with ONISQ. Optimization problems are everywhere, so even a small improvement would be valuable."

Beyond todays capabilities

As to how quantum computing and AI may impact future warfare, especially through HPEC, she adds: "I have no doubt quantum computing will be revolutionary and we'll be able to do things beyond today's capabilities. The possibilities are pretty much endless, but what they are is not crystal clear at this point. It's very difficult, with great certainty, to predict what quantum computing will be able to do. We'll just have to build and try. That's why today is such an exciting time."

Curtiss-Wright's Carter believes quantum computing and AI will be closely linked with HPEC in the future, once current limitations with both are resolved.

"AI itself is based on a lot of math being done in parallel for probability answers, similar to modeling the neurons in the brain: highly interconnected nodes and interdependent math calculations. Imagine a small device trying to recognize handwriting," Carter says. "You run every pixel of that through lots and lots of math, combining and mixing, cutting some, amplifying others, until you get a 98 percent answer at the other end. Quantum computing could help with that, and researchers are looking at how you would do that, using a different level of parallel math."
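Carter's "lots and lots of math" can be made concrete with a toy classifier layer (the pixels, weights, and classes below are invented purely for illustration): weighted sums "combine and mix" the pixel values, and a softmax turns the resulting scores into the kind of high-confidence percentage he mentions.

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [x / total for x in exps]

def classify(pixels, weights, biases):
    # One weighted sum per class: the "combining and mixing" step.
    scores = [sum(w * p for w, p in zip(row, pixels)) + b
              for row, b in zip(weights, biases)]
    return softmax(scores)

# Toy 4-pixel "image" and two classes; the weights are made up.
pixels = [0.0, 1.0, 1.0, 0.0]
weights = [[2.0, -1.0, -1.0, 2.0],   # class 0 responds to corner pixels
           [-1.0, 2.0, 2.0, -1.0]]   # class 1 responds to centre pixels
probs = classify(pixels, weights, [0.0, 0.0])
# class 1 wins with a confidence well above 95 percent here
```

Real networks stack many such layers over thousands of pixels, which is why the workload is dominated by highly parallel arithmetic.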

"How quantum computing will be applied to HPEC will be the big trick: how to get that deployed. Imagine we're a SIGINT [signals intelligence] platform, land, air, or sea; there are a lot of challenges, such as picking the right signal out of the air, which is not particularly easy," Carter continues. "Once you achieve pattern recognition, you want to do code breaking to get that encrypted traffic immediately. Getting that on a deployed platform could be useful; otherwise you bring your data back to a quantum computer in a building, but that means you don't get the results immediately."

The technology research underway today is expected to show progress toward making quantum computing more applicable to military needs, but it is unlikely to produce major results quickly, especially in the area of HPEC.

"Trapped ions and superconducting circuits still require a lot of infrastructure to make them work. Some teams are working on that problem, but the systems still remain room-sized. The idea of quantum computing being like an integrated circuit you just put on a circuit board? We're a very long way from that," Biercuk says. "The systems are getting smaller, more compact, but there is a very long way to go to deployable, embeddable systems. Position, navigation, and timing systems are being reduced and can be easily deployed on aircraft. That's probably where the technology will remain in the next 20 years; but, eventually, with new technology development, quantum computers may be reduced to more mobile sizes."

"The next 10 years are about achieving quantum advantage with the systems available now or their iterations. Despite the acceleration we have seen, there are things that are just hard and require a lot of creativity," Biercuk continues. "We're shrinking the hardware, but that hardware still may not be relevant to any deployable system. In 20 years, we may have machines that can do the work required, but in that time we may only be able to shrink them to a size that can fit on an aircraft carrier as local code-breaking engines. To miniaturize this technology to put it on, say, a body-carried system, we just don't have any technology basis to claim we will get there even in 20 years. That's open to creativity and discovery."

Even with all of the research underway worldwide, one question remains dominant.

"The general challenge is it is not clear what we will use quantum computing for," notes Rad Balu, a computer scientist in ARL's Computational & Informational Sciences Directorate.

The rest is here:

The future of artificial intelligence and quantum computing - Military & Aerospace Electronics

Written by admin

September 1st, 2020 at 10:55 am

Posted in Quantum Computer

Researchers Found Another Impediment for Quantum Computers to Overcome – Dual Dove

Posted: at 10:55 am



Keeping qubits stable will be pivotal to realizing the potential of quantum computing, and now researchers have discovered a new obstacle to this stability: natural radiation.

Natural or background radiation is produced by various sources, both natural and artificial. Cosmic rays produce natural radiation, for instance, and so do concrete buildings. It surrounds us all the time, and so it poses something of an issue for future quantum computers.

After numerous experiments that modified the level of natural radiation around qubits, physicists could establish that this background noise does indeed push qubits off balance in a way that hinders them from operating properly.

"Our study is the first to show clearly that low-level ionizing radiation in the environment degrades the performance of superconducting qubits," says physicist John Orrell of the Pacific Northwest National Laboratory (PNNL). "These findings suggest that radiation shielding will be necessary to attain long-sought performance in quantum computers of this design."

Natural radiation is by no means the most important or the only threat to qubit stability, which is known as coherence; everything from temperature fluctuations to electromagnetic fields can knock a qubit off balance.

However, scientists say that if we're to attain a future where quantum computers perform most of our advanced computing needs, then this hindrance from natural radiation will have to be addressed.

After the team behind the study ran into issues with superconducting qubit decoherence, it decided to examine natural radiation as a possible cause. The researchers discovered that it breaks up a key quantum pairing known as the Cooper pair of electrons.

"The radiation breaks apart matched pairs of electrons that typically carry electric current without resistance in a superconductor," says physicist Brent VanDevender of PNNL. "The resistance of those unpaired electrons destroys the delicately prepared state of a qubit."

Regular computers can be disrupted by the same issues that affect qubits, but quantum states are far more delicate and sensitive. One of the reasons we don't have authentic full-scale quantum computers today is that there's no way yet to keep qubits stable for more than a few milliseconds at a time.

If we can improve on that, the benefits in computing power could be gigantic: while classical computer bits can only be set to 1 or 0, qubits can be set to 1, 0, or both at the same time, a state known as superposition.
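The superposition described above can be imitated with a tiny state-vector sketch (a classical simulation for intuition, not how real hardware is programmed): a qubit is a pair of amplitudes, and measurement collapses it to 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import math
import random

# A qubit state |psi> = a|0> + b|1> is a pair of amplitudes.
# An equal superposition, as produced by a Hadamard gate acting on |0>:
a = b = 1 / math.sqrt(2)
assert abs(a**2 + b**2 - 1) < 1e-9  # amplitudes must stay normalized

def measure(shots, p0, rng=random.Random(0)):
    """Sample measurements: each shot collapses to 0 with probability p0.
    Returns how many shots read 1 (seeded for reproducibility)."""
    return sum(1 for _ in range(shots) if rng.random() >= p0)

ones = measure(10_000, abs(a) ** 2)
# Roughly half the shots read 1: the qubit is "1, 0, or both" only until
# measured, at which point the superposition collapses to a definite bit.
```

Decoherence, including the radiation effect studied here, is precisely what destroys these amplitudes before a computation can finish.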

Researchers have managed to achieve this, but only for very short periods and in extremely controlled settings. The good news, however, is that scientists like those at PNNL are dedicated to the challenge of discovering how to make quantum computers a reality, and with the new finding, we know a bit more about what we have to overcome.

"Practical quantum computing with these devices will not be possible unless we address the radiation issue," says VanDevender. "Without mitigation, radiation will limit the coherence time of superconducting qubits to a few milliseconds, which is insufficient for practical quantum computing."

A paper detailing the research has been published in the journal Nature.

Known for her passion for writing, Paula contributes to both Science and Health niches here at Dual Dove.

See original here:

Researchers Found Another Impediment for Quantum Computers to Overcome - Dual Dove


Quantum Cryptography Market Research Analysis Including Growth Factors, Types And Application By Regions From 2024 – Kentucky Journal 24

Posted: at 10:55 am



Overview:

Quantum cryptography is a new method for secret communications that provides assurance of the security of digital data. Quantum cryptography is primarily based on the use of individual particles/waves of light (photons) and their essential quantum properties for the development of an unbreakable cryptosystem, primarily because it is impossible to measure the quantum state of any system without disturbing that system.


It is hypothetically possible that other particles could be used, but photons offer all the necessary qualities: their behavior is comparatively well understood, and they are the information carriers in optical fiber cables, the most promising medium for very high-bandwidth communications.

Quantum computing focuses on the growing computer technology built on quantum theory, which describes the nature and behavior of energy and matter at the quantum level. The prominence of quantum mechanics in cryptography is growing because quantum properties are being used extensively in the encryption of information. Quantum cryptography allows the transmission of the most critical data at the most secure level, which in turn propels the growth of the quantum computing market. Quantum computing has a huge array of applications.
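The photon-based key exchange sketched in this overview is usually illustrated with the BB84 protocol. Below is a simplified software simulation (random bits stand in for photon polarizations, and no eavesdropper is modeled): when Alice's and Bob's randomly chosen bases match, Bob recovers her bit exactly, and the publicly "sifted" positions form a shared secret key.

```python
import random

rng = random.Random(42)  # seeded so the sketch is reproducible
n = 32

# Alice picks random bits and random encoding bases
# (0 = rectilinear, 1 = diagonal polarization basis).
alice_bits  = [rng.randint(0, 1) for _ in range(n)]
alice_bases = [rng.randint(0, 1) for _ in range(n)]

# Bob measures each photon in his own random basis. When his basis
# matches Alice's he reads her bit; otherwise his outcome is random.
bob_bases = [rng.randint(0, 1) for _ in range(n)]
bob_bits = [bit if ab == bb else rng.randint(0, 1)
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: they publicly compare bases (never bits) and keep the matches.
key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
key_bob   = [b for b, ab, bb in zip(bob_bits,  alice_bases, bob_bases) if ab == bb]
assert key_alice == key_bob  # a shared secret key, absent an eavesdropper
```

The security argument is the one stated above: an eavesdropper who measures the photons in the wrong basis disturbs them, and the resulting errors in the sifted key reveal the intrusion.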

Market Analysis:

According to Infoholic Research, the global quantum cryptography market is expected to reach $1.53 billion by 2023, growing at a CAGR of around 26.13% during the forecast period. The market is experiencing growth due to increasing data security and privacy concerns. In addition, growth in the adoption of cloud storage and computing technologies is driving the market forward. However, low customer awareness of quantum cryptography is hindering market growth. Rising demand for security solutions across different verticals is expected to create lucrative opportunities for the market.
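The headline figures can be sanity-checked with the standard compound-annual-growth-rate relation, future = present × (1 + r)^n. The six-year 2017-to-2023 window below is an assumption for illustration (the report does not state its base year here):

```python
# CAGR relation: future_value = present_value * (1 + rate) ** years
rate = 0.2613        # 26.13% CAGR, from the report
future_2023 = 1.53   # $ billion by 2023, from the report
years = 6            # assumed forecast window, 2017 -> 2023

implied_base = future_2023 / (1 + rate) ** years
# Under this assumed window, the implied base-year market is
# roughly $0.38 billion.
print(round(implied_base, 2))
```

Changing the assumed window shifts the implied base-year size accordingly; the formula itself is the only firm part of the check.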

Market Segmentation Analysis:

The report provides a wide-ranging evaluation of the market. It provides in-depth qualitative insights, historical data, and supportable projections and assumptions about the market size. The projections featured in the report have been derived using proven research methodologies and assumptions based on the vendors' portfolios, blogs, whitepapers, and vendor presentations. Thus, the research report covers every side of the market and is segmented based on regional markets, type, applications, and end-users.

Countries and Vertical Analysis:

The report contains an in-depth analysis of the vendor profiles, which include financial health, business units, key business priorities, SWOT, strategy, and views, as well as the competitive landscape. The prominent vendors covered in the report include ID Quantique, MagiQ Technologies, Nucrypt, Infineon Technologies, Qutools, QuintessenceLabs, Crypta Labs, PQ Solutions, and Qubitekk, among others. The vendors have been identified based on portfolio, geographical presence, marketing and distribution channels, revenue generation, and significant investments in R&D.


Competitive Analysis

The report covers and analyzes the global quantum cryptography market. Various strategies, such as joint ventures, partnerships, collaborations, and contracts, have been considered. In addition, as customers are in search of better solutions, there is expected to be a rising number of strategic partnerships for better product development. There is likely to be an increase in the number of mergers, acquisitions, and strategic partnerships during the forecast period.

Companies such as Nucrypt, Crypta Labs, Qutools, and Magiq Technologies are the key players in the global Quantum Cryptography market. Nucrypt has developed technologies for emerging applications in metrology and communication. The company has also produced and manufactured electronic and optical pulsers. In addition, Crypta Labs deals in application security for devices. The company deals in Quantum Random Number Generator products and solutions and Internet of Things (IoT). The major sectors the company is looking at are transport, military and medical.

The report includes complete insight into the industry and aims to provide an opportunity for emerging and established players to understand the market trends, current scenario, government initiatives, and the latest technologies related to the market. In addition, it helps venture capitalists understand the companies better and make informed decisions.

Regional Analysis

The Americas held the largest chunk of market share in 2017 and is expected to dominate the quantum cryptography market during the forecast period. The region has always been a hub for high investments in research and development (R&D) activities, thus contributing to the development of new technologies. The growing concerns for the security of IT infrastructure and complex data in America have directed the enterprises in this region to adopt quantum cryptography and reliable authentication solutions.


Benefits

The report provides an in-depth analysis of the global quantum cryptography market, aiming to reduce the time to market for products and services, reduce operational cost, improve accuracy, and improve operational performance. With the help of quantum cryptography, various organizations can secure their crucial information and increase productivity and efficiency. In addition, the solutions are proven to be reliable and improve scalability. The report discusses the types, applications, and regions related to this market. Further, it provides details about the major challenges impacting market growth.

Read more here:

Quantum Cryptography Market Research Analysis Including Growth Factors, Types And Application By Regions From 2024 - Kentucky Journal 24


Q-NEXT collaboration awarded National Quantum Initiative funding – University of Wisconsin-Madison

Posted: at 10:55 am



The University of Wisconsin–Madison solidified its standing as a leader in the field of quantum information science when the U.S. Department of Energy (DOE) and the White House announced the Q-NEXT collaboration as a funded Quantum Information Science Research Center through the National Quantum Initiative Act. The five-year, $115 million collaboration was one of five centers announced today.

Q-NEXT, a next-generation quantum science and engineering collaboration led by the DOE's Argonne National Laboratory, brings together nearly 100 world-class researchers from three national laboratories, 10 universities including UW–Madison, and 10 leading U.S. technology companies to develop the science and technology to control and distribute quantum information.

"The main goals for Q-NEXT are first to deliver quantum interconnects, to find ways to quantum mechanically connect distant objects," says Mark Eriksson, the John Bardeen Professor of Physics at UW–Madison and a Q-NEXT thrust lead. "And next, to establish a national resource to both develop and provide pristine materials for quantum science and technology."

Q-NEXT will focus on three core quantum technologies:

Eriksson is leading the Materials and Integration thrust, one of six Q-NEXT focus areas that features researchers from across the collaboration. This thrust aims to: develop high-coherence materials, including for silicon and superconducting qubits, which are essential for preserving entanglement; develop a silicon-based optical quantum memory, which is important in developing a quantum repeater; and improve color-center quantum bits, which are used in both communication and sensing.

"One of the key goals in Materials and Integration is to not just improve the materials but also to improve how you integrate those materials together, so that in the end quantum devices maintain coherence and preserve entanglement," Eriksson says. "The integration part of the name is really important. You may have a material that on its own is really good at preserving coherence, yet you only make something useful when you integrate materials together."

Six other UW–Madison and Wisconsin Quantum Institute faculty members are Q-NEXT investigators: physics professors Victor Brar, Shimon Kolkowitz, Robert McDermott, and Mark Saffman, electrical and computer engineering professor Mikhail Kats, and chemistry professor Randall Goldsmith. UW–Madison researchers are involved in five of the six research thrusts.

"I'm excited about Q-NEXT because of the connections and collaborations it provides to national labs, other universities, and industry partners," Eriksson says. "When you're talking about research, it's those connections that often lead to the breakthroughs."

The potential impacts of Q-NEXT research include the creation of a first-ever National Quantum Devices Database that will promote the development and fabrication of next-generation quantum devices, as well as the development of the components and systems that enable quantum communications across distances ranging from microns to kilometers.

"This funding helps ensure that the Q-NEXT collaboration will lead the way in future developments in quantum science and engineering," says Steve Ackerman, UW–Madison vice chancellor for research and graduate education. "Q-NEXT is the epitome of the Wisconsin Idea as we work together to transfer new quantum technologies to the marketplace and support U.S. economic competitiveness in this growing field."

Read more here:

Q-NEXT collaboration awarded National Quantum Initiative funding - University of Wisconsin-Madison

Written by admin

September 1st, 2020 at 10:55 am

Posted in Quantum Computer

This Equation Calculates The Chances We Live In A Computer Simulation – Discover Magazine

Posted: at 10:55 am


without comments


The Drake equation is one of the more famous reckonings in science. It calculates the likelihood that we are not alone in the universe by estimating the number of other intelligent civilizations in our galaxy that might exist now.

Some of the terms in this equation are well known or becoming better understood, such as the number of stars in our galaxy and the proportion that have planets in the habitable zone. But others are unknown, such as the proportion of planets that develop intelligent life; and some may never be known, such as the proportion of civilizations that destroy themselves before they can be discovered.

Nevertheless, the Drake equation allows scientists to place important bounds on the numbers of intelligent civilizations that might be out there.

However, there is another sense in which humanity could be linked with an alien intelligence: our world may just be a simulation inside a massively powerful supercomputer run by such a species. Indeed, various scientists, philosophers and visionaries have said that the probability of such a scenario could be close to one. In other words, we probably are living in a simulation.

The accuracy of these claims is somewhat controversial. So a better way to determine the probability that we live in a simulation would be much appreciated.

Enter Alexandre Bibeau-Delisle and Gilles Brassard at the University of Montreal in Canada. These researchers have derived a Drake-like equation that calculates the chances that we live in a computer simulation. And the results throw up some counterintuitive ideas that are likely to change the way we think about simulations, how we might determine whether we are in one and whether we could ever escape.

Bibeau-Delisle and Brassard begin with a fundamental estimate of the computing power available to create a simulation. They say, for example, that a kilogram of matter, fully exploited for computation, could perform 10^50 operations per second.

By comparison, the human brain, which is also kilogram-sized, performs up to 10^16 operations per second. "It may thus be possible for a single computer the mass of a human brain to simulate the real-time evolution of 1.4 × 10^25 virtual brains," they say.

In our society, a significant number of computers already simulate entire civilizations, in games such as Civilization VI, Hearts of Iron IV, Humankind, and so on. So it may be reasonable to assume that in a sufficiently advanced civilization, individuals will be able to run games that simulate societies like ours, populated with sentient, conscious beings.

So an interesting question is this: of all the sentient beings in existence, what fraction are likely to be simulations? To derive the answer, Bibeau-Delisle and Brassard start with the total number of real sentient beings, NRe; multiply that by the fraction with access to the necessary computing power, fCiv; multiply this by the fraction of that power that is devoted to simulating consciousness, fDed (because these beings are likely to be using their computers for other purposes too); and then multiply this by the number of brains they could simulate, RCal.

The resulting equation is this, where fSim is the fraction of simulated brains:
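The equation itself appears to have been an image lost in extraction. From the quantities defined above, a consistent reconstruction (a sketch, not the authors' exact typesetting) is: the simulated brains, NRe fCiv fDed RCal, divided by all brains, simulated plus real:

```latex
f_{\mathrm{Sim}}
  = \frac{N_{\mathrm{Re}}\, f_{\mathrm{Civ}}\, f_{\mathrm{Ded}}\, R_{\mathrm{Cal}}}
         {N_{\mathrm{Re}}\, f_{\mathrm{Civ}}\, f_{\mathrm{Ded}}\, R_{\mathrm{Cal}} + N_{\mathrm{Re}}}
  = \frac{f_{\mathrm{Civ}}\, f_{\mathrm{Ded}}\, R_{\mathrm{Cal}}}
         {f_{\mathrm{Civ}}\, f_{\mathrm{Ded}}\, R_{\mathrm{Cal}} + 1}
```

Note that NRe cancels, which is why only the product fCiv fDed RCal matters in the conclusion that follows.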

Here RCal is the huge number of brains that fully exploited matter should be able to simulate.

The sheer size of this number, ~10^25, pushes Bibeau-Delisle and Brassard towards an inescapable conclusion. "It is mathematically inescapable from [the above] equation and the colossal scale of RCal that fSim ≈ 1 unless fCiv · fDed ≈ 0," they say.

So there are two possible outcomes. Either we live in a simulation or a vanishingly small proportion of advanced computing power is devoted to simulating brains.

It's not hard to imagine why the second option might be true. "A society of beings similar to us (but with a much greater technological development) could indeed decide it is not very ethical to simulate beings with enough precision to make them conscious while fooling them and keeping them cut off from the real world," say Bibeau-Delisle and Brassard.

Another possibility is that advanced civilizations never get to the stage where their technology is powerful enough to perform these kinds of computations. Perhaps they destroy themselves through war or disease or climate change long before then. There is no way of knowing.

But suppose we are in a simulation. Bibeau-Delisle and Brassard ask whether we might escape while somehow hiding our intentions from our overlords. They assume that the simulating technology will be quantum in nature. "If quantum phenomena are as difficult to compute on classical systems as we believe them to be, a simulation containing our world would most probably run on quantum computing power," they say.

This raises the possibility that it may be possible to detect our alien overlords since they cannot measure the quantum nature of our world without revealing their presence. Quantum cryptography uses the same principle; indeed, Brassard is one of the pioneers of this technology.

That would seem to make it possible for us to make encrypted plans that are hidden from the overlords, such as secretly transferring ourselves into our own simulations.

However, the overlords have a way to foil this. All they need to do is rewire their simulation to make it look as if we are able to hide information, even though they are aware of it all the time. "If the simulators are particularly angry at our attempted escape, they could also send us to a simulated hell, in which case we would at least have the confirmation we were truly living inside a simulation and our paranoia was not unjustified," conclude Bibeau-Delisle and Brassard, with their tongues firmly in their cheeks.

In that sense, we are the ultimate laboratory guinea pigs: forever trapped and forever fooled by the evil genius of our omnipotent masters.

Time for another game of Civilization VI.

Ref: arxiv.org/abs/2008.09275 : Probability and Consequences of Living Inside a Computer Simulation

Here is the original post:

This Equation Calculates The Chances We Live In A Computer Simulation - Discover Magazine

Written by admin

September 1st, 2020 at 10:55 am

Posted in Quantum Computer

I confess, I’m scared of the next generation of supercomputers – TechRadar

Posted: at 10:55 am


without comments

Earlier this year, a Japanese supercomputer built on Arm-based Fujitsu A64FX processors snatched the crown of world's fastest machine, blowing incumbent leader IBM Summit out of the water.

Fugaku, as the machine is known, achieved 415.5 petaFLOPS in the popular High Performance Linpack (HPL) benchmark, almost three times the score of the IBM machine (148.5 petaFLOPS).

It also topped the rankings for the Graph 500, HPL-AI and HPCG workloads - a feat never before achieved in the world of high performance computing (HPC).

Modern supercomputers are now edging ever-closer to the landmark figure of one exaFLOPS (equal to 1,000 petaFLOPS), commonly described as the exascale barrier. In fact, Fugaku itself can already achieve one exaFLOPS, but only in lower precision modes.

The consensus among the experts we spoke to is that a single machine will breach the exascale barrier within the next 6 to 24 months, unlocking a wealth of possibilities in the fields of medical research, climate forecasting, cybersecurity and more.

But what is an exaFLOPS? And what will it mean to break the exascale milestone, pursued doggedly for more than a decade?

To understand what it means to achieve exascale computing, it's important to first understand what is meant by FLOPS, which stands for floating point operations per second.

A floating point operation is any mathematical calculation (i.e. addition, subtraction, multiplication or division) that involves a number containing a decimal (e.g. 3.0 - a floating point number), as opposed to a number without a decimal (e.g. 3 - a binary integer). Calculations involving decimals are typically more complex and therefore take longer to solve.

An exascale computer can perform 10^18 (one quintillion, or 1,000,000,000,000,000,000) of these mathematical calculations every second.

For context, to equal the number of calculations an exascale computer can process in a single second, an individual would have to perform one sum every second for 31,688,765,000 years.
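The arithmetic behind that figure is easy to check (a quick sketch using the article's own "one sum per second" framing):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, about 31.6 million seconds

exaflops = 1e18                       # operations per second at exascale
years = exaflops / SECONDS_PER_YEAR   # years to do 1e18 sums at one per second
print(f"{years:.3e}")                 # roughly 3.17e10, i.e. ~31.7 billion years

# The desktop comparison made below: 147 billion FLOPS as a fraction of exascale
print(147e9 / 1e18)                   # 1.47e-07 exaFLOPS
```

The slight difference from the article's 31,688,765,000 comes down to which definition of "year" is used.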

The PC I'm using right now, meanwhile, is able to reach 147 billion FLOPS (or 0.00000014723 exaFLOPS), outperforming the fastest supercomputer of 1993, the Intel Paragon (143.4 billion FLOPS).

This both underscores how far computing has come in the last three decades and puts into perspective the extreme performance levels attained by the leading supercomputers today.

The key to building a machine capable of reaching one exaFLOPS is optimization at the processing, storage and software layers.

The hardware must be small and powerful enough to pack together and reach the necessary speeds, the storage capacious and fast enough to serve up the data and the software scalable and programmable enough to make full use of the hardware.

For example, there comes a point at which adding more processors to a supercomputer will no longer affect its speed, because the application is not sufficiently optimized. The only way governments and private businesses will realize a full return on HPC hardware investment is through an equivalent investment in software.

Organizations such as the Exascale Computing Project (ECP) and the ExCALIBUR programme are interested in solving precisely this problem. Those involved claim a renewed focus on algorithm and application development is required in order to harness the full power and scope of exascale.

Achieving the delicate balance between software and hardware, in an energy efficient manner and avoiding an impractically low mean time between failures (MTBF) score (the time that elapses before a system breaks down under strain) is the challenge facing the HPC industry.

"15 years ago, as we started the discussion on exascale, we hypothesized that it would need to be done in 20 megawatts (MW); later that was changed to 40 MW. With Fugaku, we see that we are about halfway to a 64-bit exaFLOPS at the 40 MW power envelope, which shows that an exaFLOPS is in reach today," explained Brent Gorda, Senior Director HPC at UK-based chip manufacturer Arm.

"We could hit an exaFLOPS now with sufficient funding to build and run a system. [But] the size of the system is likely to be such that MTBF is measured in single-digit number of days, based on today's technologies and the number of components necessary to reach these levels of performance."

When it comes to building a machine capable of breaching the exascale barrier, there are a number of other factors at play, beyond technological feasibility. An exascale computer can only come into being once an equilibrium has been reached at the intersection of technology, economics and politics.

"One could in theory build an exascale system today by packing in enough CPUs, GPUs and DPUs. But what about economic viability?" said Gilad Shainer of NVIDIA Mellanox, the firm behind the InfiniBand technology (the fabric that links the various hardware components) found in seven of the ten fastest supercomputers.

"Improvements in computing technologies, silicon processing, more efficient use of power and so on all help to increase efficiency and make exascale computing an economic objective as opposed to a sort of sporting achievement."

According to Paul Calleja, who heads up computing research at the University of Cambridge and is working with Dell on the Open Exascale Lab, Fugaku is an excellent example of what is theoretically possible today, but is also impractical by virtually any other metric.

"If you look back at Japanese supercomputers, historically there's only ever been one of them made. They have beautifully exquisite architectures, but they're so stupidly expensive and proprietary that no one else could afford one," he told TechRadar Pro.

"[Japanese organizations] like these really large technology demonstrators, which are very useful in industry because it shows the direction of travel and pushes advancements, but those kinds of advancements are very expensive and not sustainable, scalable or replicable."

So, in this sense, there are two separate exascale landmarks: the theoretical barrier, which will likely be met first by a machine of Fugaku's ilk (a technology demonstrator), and the practical barrier, which will see exascale computing deployed en masse.

Geopolitical factors will also play a role in how quickly the exascale barrier is breached. Researchers and engineers might focus exclusively on the technological feat, but the institutions and governments funding HPC research are likely motivated by different considerations.

"Exascale computing is not just about reaching theoretical targets, it is about creating the ability to tackle problems that have been previously intractable," said Andy Grant, Vice President HPC & Big Data at IT services firm Atos, influential in the fields of HPC and quantum computing.

"Those that are developing exascale technologies are not doing it merely to have the fastest supercomputer in the world, but to maintain international competitiveness, security and defence."

"In Japan, their new machine is roughly 2.8x more powerful than the now second-place system. In broad terms, that will enable Japanese researchers to address problems that are 2.8x more complex. In the context of international competitiveness, that creates a significant advantage."

In years gone by, rival nations fought it out in the trenches or competed to see who could place the first human on the moon. But computing may well become the frontier at which the next arms race takes place; supremacy in the field of HPC might prove just as politically important as military strength.

Once exascale computers become an established resource - available for businesses, scientists and academics to draw upon - a wealth of possibilities will be unlocked across a wide variety of sectors.

HPC could prove revelatory in the fields of clinical medicine and genomics, for example, which require vast amounts of compute power to conduct molecular modelling, simulate interactions between compounds and sequence genomes.

In fact, IBM Summit and a host of other modern supercomputers are being used to identify chemical compounds that could contribute to the fight against coronavirus. The Covid-19 High Performance Computing Consortium assembled 16 supercomputers, accounting for an aggregate of 330 petaFLOPS - but imagine how much more quickly research could be conducted using a fleet of machines capable of reaching 1,000 petaFLOPS on their own.

Artificial intelligence, meanwhile, is another cross-disciplinary domain that will be transformed with the arrival of exascale computing. The ability to analyze ever-larger datasets will improve the ability of AI models to make accurate forecasts (contingent on the quality of data fed into the system) that could be applied to virtually any industry, from cybersecurity to e-commerce, manufacturing, logistics, banking, education and many more.

As explained by Rashid Mansoor, CTO at UK supercomputing startup Hadean, the value of supercomputing lies in the ability to make an accurate projection (of any variety).

"The primary purpose of a supercomputer is to compute some real-world phenomenon to provide a prediction. The prediction could be the way proteins interact, the way a disease spreads through the population, how air moves over an aerofoil or electromagnetic fields interact with a spacecraft during re-entry," he told TechRadar Pro.

"Raw performance such as the HPL benchmark simply indicates that we can model bigger and more complex systems to a greater degree of accuracy. One thing that the history of computing has shown us is that the demand for computing power is insatiable."

Other commonly cited areas that will benefit significantly from the arrival of exascale include brain mapping, weather and climate forecasting, product design and astronomy, but it's also likely that brand new use cases will emerge as well.

"The desired workloads and the technology to perform them form a virtuous circle. The faster and more performant the computers, the more complex problems we can solve and the faster the discovery of new problems," explained Shainer.

"What we can be sure of is that we will see continuous needs, or ever-growing demands, for more performance capabilities in order to solve the unsolvable. Once this is solved, we will find the new unsolvable."

By all accounts, the exascale barrier will likely fall within the next two years, but the HPC industry will then turn its attention to the next objective, because the work is never done.

Some might point to quantum computers, which approach problem solving in an entirely different way to classical machines (exploiting symmetries to speed up processing), allowing for far greater scale. However, there are also problems to which quantum computing cannot be applied.

"Mid-term (10-year) prospects for quantum computing are starting to shape up, as are other technologies. These will be more specialized, where a quantum computer will very likely show up as an application accelerator for problems that relate to logistics first. They won't completely replace the need for current architectures for IT/data processing," explained Gorda.

As Mansoor puts it, "on certain problems even a small quantum computer can be exponentially faster than all of the classical computing power on earth combined. Yet on other problems, a quantum computer could be slower than a pocket calculator."

The next logical landmark for traditional computing, then, would be one zettaFLOPS, equal to 1,000 exaFLOPS or 1,000,000 petaFLOPS.

Chinese researchers predicted in 2018 that the first zettascale system will come online in 2035, paving the way for new computing paradigms. The paper itself reads like science fiction, at least for the layman:

"To realize these metrics, micro-architectures will evolve to consist of more diverse and heterogeneous components. Many forms of specialized accelerators are likely to co-exist to boost HPC in a joint effort. Enabled by new interconnect materials such as photonic crystal, fully optical interconnecting systems may come into use."

Assuming one exaFLOPS is reached by 2022, 14 years will have elapsed between the creation of the first petascale and first exascale systems. The first terascale machine, meanwhile, was constructed in 1996, 12 years before the petascale barrier was breached.

If this pattern were to continue, the Chinese researchers' estimate would look relatively sensible, but there are firm question marks over the validity of zettascale projections.

While experts are confident in their predicted exascale timelines, none would venture a guess at when zettascale might arrive without prefacing their estimate with a long list of caveats.

"Is that an interesting subject? Because to be honest with you, it's so not obtainable. To imagine how we could go 1,000x beyond [one exaFLOPS] is not a conversation anyone could have, unless they're just making it up," said Calleja, when asked about the concept of zettascale.

Others were more willing to theorize, but equally reticent to guess at a specific timeline. According to Grant, the way zettascale machines process information will be unlike any supercomputer in existence today.

"[Zettascale systems] will be data-centric, meaning components will move to the data rather than the other way around, as data volumes are likely to be so large that moving data will be too expensive. Regardless, predicting what they might look like is all guesswork for now," he said.

It is also possible that the decentralized model might be the fastest route to achieving zettascale, with millions of less powerful devices working in unison to form a collective supercomputer more powerful than any single machine (as put into practice by the SETI Institute).

As noted by Saurabh Vij, CEO of distributed supercomputing firm Q Blocks, decentralized systems address a number of problems facing the HPC industry today, namely surrounding building and maintenance costs. They are also accessible to a much wider range of users and therefore democratize access to supercomputing resources in a way that is not otherwise possible.

"There are benefits to a centralized architecture, but the cost and maintenance barrier overshadows them. [Centralized systems] also alienate a large base of customer groups that could benefit," he said.

"We think a better way is to connect distributed nodes together in a reliable and secure manner. It wouldn't be too aggressive to say that, five years from now, your smartphone could be part of a giant distributed supercomputer, making money for you while you sleep by solving computational problems for industry," he added.

However, incentivizing network nodes to remain active for a long period is challenging and a high rate of turnover can lead to reliability issues. Network latency and capacity problems would also need to be addressed before distributed supercomputing can rise to prominence.

Ultimately, the difficulty in making firm predictions about zettascale lies in the massive chasm that separates present-day workloads and HPC architectures from those that might exist in the future. From a contemporary perspective, it's fruitless to imagine what might be made possible by a computer so powerful.

We might imagine zettascale machines will be used to process workloads similar to those tackled by modern supercomputers, only more quickly. But it's possible - even likely - that the arrival of zettascale computing will open doors that do not and cannot exist today, so extraordinary is the leap.

In a future in which computers are 2,000+ times as fast as the most powerful machine today, philosophical and ethical debate surrounding the intelligence of man versus machine are bound to be played out in greater detail - and with greater consequence.

It is impossible to directly compare the workings of a human brain with that of a computer - i.e. to assign a FLOPS value to the human mind. However, it is not insensible to ask how many FLOPS must be achieved before a machine reaches a level of performance that might be loosely comparable to the brain.

Back in 2013, scientists used the K supercomputer to conduct a neuronal network simulation using open source simulation software NEST. The team simulated a network made up of 1.73 billion nerve cells connected by 10.4 trillion synapses.

While ginormous, the simulation represented only 1% of the human brain's neuronal network and took 40 minutes to replicate 1 second's worth of neuronal network activity.

However, the K computer reached a maximum computational power of only 10 petaFLOPS. A basic extrapolation (ignoring inevitable complexities), then, would suggest Fugaku could simulate circa 40% of the human brain, while a zettascale computer would be capable of performing a full simulation many times over.
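That back-of-the-envelope extrapolation can be written out explicitly. This is a sketch using only the figures quoted above, and it assumes (as the article does) that the simulable brain fraction scales linearly with FLOPS:

```python
k_pflops = 10.0          # peak the K computer reached in the 2013 NEST run
fugaku_pflops = 415.5    # Fugaku's HPL result quoted earlier in the piece
k_brain_fraction = 0.01  # the K run covered 1% of the brain's network

fugaku_fraction = k_brain_fraction * (fugaku_pflops / k_pflops)
print(f"{fugaku_fraction:.0%}")  # about 42%, i.e. the article's "circa 40%"

# The same 2013 run also ran far slower than real time:
slowdown = 40 * 60  # 40 minutes of wall time per simulated second
print(slowdown)     # 2400x slower than real time
```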

Digital neuromorphic hardware (supercomputers created specifically to simulate the human brain) like SpiNNaker 1 and 2 will also continue to develop in the post-exascale future. Instead of sending information from point A to B, these machines will be designed to replicate the parallel communication architecture of the brain, sending information simultaneously to many different locations.

Modern iterations are already used to help neuroscientists better understand the mysteries of the brain and future versions, aided by advances in artificial intelligence, will inevitably be used to construct a faithful and fully-functional replica.

The ethical debates that will arise with the arrival of such a machine - surrounding the perception of consciousness, the definition of thought and what an artificial uber-brain could or should be used for - are manifold and could take generations to unpick.

The inability to foresee what a zettascale computer might be capable of is also an inability to plan for the moral quandaries that might come hand-in-hand.

Whether a future supercomputer might be powerful enough to simulate human-like thought is not in question, but whether researchers should aspire to bringing an artificial brain into existence is a subject worthy of discussion.

See the article here:

I confess, I'm scared of the next generation of supercomputers - TechRadar

Written by admin

September 1st, 2020 at 10:55 am

Posted in Quantum Computer

Honeywell Wants To Show What Quantum Computing Can Do For The World – Forbes

Posted: August 14, 2020 at 11:51 pm


without comments

The race for quantum supremacy heated up in June, when Honeywell brought to market the world's highest-performing quantum computer. Honeywell claims it is more accurate (i.e., performs with fewer errors) than competing systems and that its performance will increase by an order of magnitude each year for the next five years.

Inside the chamber of Honeywell's quantum computer

"The beauty of quantum computing," says Tony Uttley, President of Honeywell Quantum Solutions, "is that once you reach a certain level of accuracy, every time you add a qubit [the basic unit of quantum information] you double the computational capacity. So as the quantum computer scales exponentially, you can scale your problem set exponentially."

Tony Uttley, President, Honeywell Quantum Solutions
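Uttley's doubling claim mirrors how the state space of a quantum register grows: describing n qubits classically takes 2^n complex amplitudes, so each added qubit doubles the capacity. A minimal sketch:

```python
def state_vector_size(n_qubits: int) -> int:
    """Number of complex amplitudes needed to describe n qubits."""
    return 2 ** n_qubits

# The state space doubles with every qubit added:
for n in (1, 2, 10, 20, 50):
    print(n, state_vector_size(n))

assert state_vector_size(21) == 2 * state_vector_size(20)
```

This exponential growth is also why simulating even modest quantum machines quickly becomes intractable on classical hardware.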

Uttley sees three distinct eras in the evolution of quantum computing. "Today, we are in the emergent era: you can start to prove what kind of things work, what kind of algorithms show the most promise." For example, the Future Lab for Applied Research and Engineering (FLARE) group of JPMorgan Chase published a paper in June summarizing the results of running, on the Honeywell quantum computer, complex mathematical calculations used in financial trading applications.

The next era Uttley calls "classically impractical": running computations on a quantum computer that typically are not run on today's (classical) computers because they take too long, consume too much power, and cost too much. Crossing the threshold from emergent to classically impractical is not very far away, he asserts, probably sometime in the next 18 to 24 months. "This is when you build the trust with the organizations you work with that the answer that is coming from your quantum computer is the correct one," says Uttley.

The companies that understand the potential impact of quantum computing on their industries are already looking at what it would take to introduce this new computing capability into their existing processes and what they need to adjust or develop from scratch, according to Uttley. These companies will be ready for the shift from emergent to classically impractical, which is going to be a binary moment, and they will be able to take advantage of it immediately.

The last stage of the quantum evolution will be "classically impossible": "you couldn't in the timeframe of the universe do this computation on a classical best-performing supercomputer that you can on a quantum computer," says Uttley. He mentions quantum chemistry, machine learning, and optimization challenges (warehouse routing, aircraft maintenance) as applications that will benefit from quantum computing. But what shows the most promise right now are "hybrid [resources]: you do just one thing, very efficiently, on a quantum computer, and run the other parts of the algorithm or calculation on a classical computer." Uttley predicts that for the foreseeable future we will see co-processing, combining the power of today's computers with the power of emerging quantum computing solutions.

"You want to use a quantum computer for the more probabilistic parts [of the algorithm] and a classical computer for the more mundane calculations; that might reduce the number of qubits needed," explains Gavin Towler, vice president and chief technology officer of Honeywell Performance Materials Technologies. Towler leads R&D activities for three of Honeywell's businesses: Advanced Materials (e.g., refrigerants), UOP (equipment and services for the oil and gas sector), and Process Automation (automation, control systems, and software for all the process industries). As such, he is the poster boy for a quantum computing lead-user.

Gavin Towler, Vice President and Chief Technology Officer, Honeywell Performance Materials and Technologies

"In the space of materials discovery, quantum computing is going to be critical. That's not a might or could be. It is going to be the way people do molecular discovery," says Towler. Molecular simulation is used in the design of new molecules, requiring the designer to understand quantum effects. These are intrinsically probabilistic, as are quantum computers, Towler explains.

An example he provides is a refrigerant Honeywell produces that is used in automotive air conditioning, supermarket refrigeration, and homes. As the chlorinated molecules in earlier refrigerants were causing the hole in the ozone layer, they were replaced by HFCs, which later turned out to be very potent greenhouse gases. Honeywell has already found a suitable replacement for the refrigerant used in automotive air conditioning, but is searching for similar solutions for other refrigeration applications. Synthesizing in the lab molecules that will prove to have no effect on the ozone layer or global warming and will not be toxic or flammable is costly. Computer simulation replaces lab work, but "ideally, you want to have computer models that will screen things out to identify leads much faster," says Towler.

This is where the speed of a quantum computer will make a difference, starting with simple molecules like the ones found in refrigerants, or in solvents used to remove CO2 from processes prevalent in the oil and gas industry. "These are relatively simple molecules, with 10-20 atoms, amenable to be modeled with [today's] quantum computers," says Towler. In the future, he expects more powerful quantum computers to assist in developing vaccines and finding new drugs, polymers, and biodegradable plastics: "things that contain hundreds and thousands of atoms."

There are three ways by which Towler's counterparts in other companies, the lead-users interested in experimenting with quantum computing, can currently access Honeywell's solution: running their programs directly on Honeywell's quantum computer; going through Microsoft Azure Quantum services; or working with two startups Honeywell has invested in, Cambridge Quantum Computing (CQC) and Zapata Computing, both of which assist in turning business challenges into quantum and hybrid computing algorithms.

Honeywell brings to the emerging quantum computing market a variety of skills in multiple disciplines, with its decades-long experience with precision control systems possibly the most important one. "Any at-scale quantum computer becomes a controls problem," says Uttley, "and we have experience in some of the most complex systems integration problems in the world." These past experiences have prepared Honeywell to show what quantum computing can do for the world and to rapidly scale up its solution. "We've built a big auditorium but we are filling out just a few seats right now, and we have lots more seats to fill," Uttley sums up this point in time in Honeywell's journey to quantum supremacy.

See the original post here:

Honeywell Wants To Show What Quantum Computing Can Do For The World - Forbes

Written by admin

August 14th, 2020 at 11:51 pm

Posted in Quantum Computer

Quantum Computing for the Next Generation of Computer Scientists and Researchers – Campus Technology

Posted: at 11:51 pm



C-Level View | Feature

A Q&A with Travis Humble

Travis Humble is a distinguished scientist and director of the Quantum Computing Institute at Oak Ridge National Laboratory. The institute is a lab-wide organization that brings together all of ORNL's capabilities to address the development of quantum computers. Humble is also an academic, holding a joint faculty appointment at the University of Tennessee, where he is an assistant professor with the Bredesen Center for Interdisciplinary Research and Graduate Education. In the following Q&A, Humble gives CT his unique perspectives on the advancement of quantum computing and its entry into higher education curricula and research.

"It's an exciting area that's largely understaffed. There are far more opportunities than there are people currently qualified to approach quantum computing." Travis Humble

Mary Grush: Working at the Oak Ridge National Laboratory as a scientist and at the University of Tennessee as an academic, you are in a remarkable position to watch both the development of the field of quantum computing and its growing importance in higher education curricula and research. First, let me ask about your role at the Bredesen Center for Interdisciplinary Research and Graduate Education. The Bredesen Center draws on resources from both ORNL and UT. Does the center help move quantum computing into the realm of higher education?

Travis Humble: Yes. The point of the Bredesen Center is to do interdisciplinary research, to educate graduate students, and to address the interfaces and frontiers of science that don't fall within the conventional departments.

For me, those objectives are strongly related to my role at the laboratory, where I am a scientist working in quantum information. And the joint work ORNL and UT do in quantum computing is training the next generation of the workforce that's going to be able to take advantage of the tools and research that we're developing at the laboratory.

Grush: Are ORNL and UT connected to bring students to the national lab to experience quantum computing?

Humble: They are so tightly connected that it works very well for us to have graduate students onsite performing research in these topics, while at the same time advancing their education through the university.

Grush: How does ORNL's Quantum Computing Institute, where you are director, promote quantum computing?

Humble: As part of my work with the Quantum Computing Institute, I manage research portfolios and direct resources towards our most critical needs at the moment. But I also use that responsibility as a gateway to get people involved with quantum computing: It's an exciting area that's largely understaffed. There are far more opportunities than there are people currently qualified to approach quantum computing.

The institute is a kind of storefront through which people from many different areas of science and engineering can become involved in quantum computing. It is there to help them get involved.

Grush: Let's get a bit of perspective on quantum computing: Why is it important?

Humble: Quantum computing is a new approach to the ways we could build computers and solve problems. This approach uses quantum mechanics, which underlies the most fundamental theories of physics. We've had a lot of success in understanding quantum mechanics; it's the foundation on which lasers, transistors, and a lot of the technologies we rely on today were built.

But it turns out there's a lot of untapped potential there: We could take further advantage of some of the features of quantum physics, by building new types of technologies.

Here is the original post:

Quantum Computing for the Next Generation of Computer Scientists and Researchers - Campus Technology

August 14th, 2020 at 11:51 pm

Quantum mechanics is immune to the butterfly effect – The Economist

Posted: at 11:51 pm



That could help with the design of quantum computers

Aug 15th 2020

IN RAY BRADBURY's science-fiction story "A Sound of Thunder", a character time-travels far into the past and inadvertently crushes a butterfly underfoot. The consequences of that minuscule change ripple through reality such that, upon the time-traveller's return, the present has been dramatically changed.

The butterfly effect describes the high sensitivity of many systems to tiny changes in their starting conditions. But while it is a feature of classical physics, it has been unclear whether it also applies to quantum mechanics, which governs the interactions of tiny objects like atoms and fundamental particles. Bin Yan and Nikolai Sinitsyn, a pair of physicists at Los Alamos National Laboratory, decided to find out. As they report in Physical Review Letters, quantum-mechanical systems seem to be more resilient than classical ones. Strangely, they seem to have the capacity to repair damage done in the past as time unfolds.

To perform their experiment, Drs Yan and Sinitsyn ran simulations on a small quantum computer made by IBM. They constructed a simple quantum system consisting of qubits, the quantum analogue of the familiar one-or-zero bits used by classical computers. Like an ordinary bit, a qubit can be either one or zero. But it can also exist in superposition, a chimerical mix of both states at once.
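A qubit's amplitudes can be sketched classically. The following minimal illustration (a pure-Python toy, not the IBM hardware the researchers used) represents a qubit as a pair of complex amplitudes and shows the equal superposition a Hadamard gate creates:

```python
import math

# A qubit state as a pair of complex amplitudes (a, b) for |0> and |1>;
# measurement yields 0 with probability |a|^2 and 1 with probability |b|^2.
zero = (1 + 0j, 0 + 0j)

def hadamard(state):
    """Apply the Hadamard gate, which maps |0> to an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

plus = hadamard(zero)
probs = [abs(amp) ** 2 for amp in plus]
print([round(p, 3) for p in probs])  # [0.5, 0.5]: equal chance of measuring 0 or 1
```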

Having established the system, the authors prepared a particular qubit by setting its state to zero. That qubit was then allowed to interact with the others in a process called quantum scrambling which, in this case, mimics the effect of evolving a quantum system backwards in time. Once this virtual foray into the past was completed, the authors disturbed the chosen qubit, destroying its local information and its correlations with the other qubits. Finally, the authors performed a reversed scrambling process on the now-damaged system. This was analogous to running the quantum system all the way forwards in time to where it all began.

They then checked to see how similar the final state of the chosen qubit was to the zero-state it had been assigned at the beginning of the experiment. The classical butterfly effect suggests that the researchers' meddling should have changed it quite drastically. In the event, the qubit's original state had been almost entirely recovered. Its state was not quite zero, but it was, in quantum-mechanical terms, 98.3% of the way there, a difference that was deemed insignificant. "The final output state after the forward evolution is essentially the same as the input state before backward evolution," says Dr Sinitsyn. "It can be viewed as the same input state plus some small background noise." Oddest of all was the fact that the further back in simulated time the damage was done, the greater the rate of recovery, as if the quantum system was repairing itself with time.
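The structure of the protocol (scramble, damage, unscramble, compare) can be sketched with a classical state-vector simulation. This is a toy, not the paper's experiment: a made-up layered circuit stands in for the scrambling evolution, and the damage is modelled as a bit-flip rather than the measurement the researchers used, so the recovered overlap here need not match the 98.3% figure.

```python
import math
import random

N = 3                 # qubits in the toy system
DIM = 2 ** N

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of a state vector (qubit 0 = least-significant bit)."""
    out = [0j] * DIM
    for i in range(DIM):
        j = i ^ (1 << q)                      # basis index with qubit q flipped
        if (i >> q) & 1 == 0:
            out[i] = gate[0][0] * state[i] + gate[0][1] * state[j]
        else:
            out[i] = gate[1][0] * state[j] + gate[1][1] * state[i]
    return out

def apply_cz(state, q1, q2):
    """Controlled-Z gate: negate amplitudes where both qubits are 1."""
    return [-a if (i >> q1) & 1 and (i >> q2) & 1 else a
            for i, a in enumerate(state)]

def ry(theta):
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s], [s, c]]

def scramble(state, angles, inverse=False):
    """Layers of rotations and CZ entangling gates, a stand-in for scrambling evolution."""
    for row in (reversed(angles) if inverse else angles):
        if inverse:                           # undo each layer, in reverse order
            for q in range(N):
                state = apply_cz(state, q, (q + 1) % N)
            for q in range(N):
                state = apply_1q(state, ry(-row[q]), q)
        else:
            for q in range(N):
                state = apply_1q(state, ry(row[q]), q)
            for q in range(N):
                state = apply_cz(state, q, (q + 1) % N)
    return state

random.seed(1)
angles = [[random.uniform(0, 2 * math.pi) for _ in range(N)] for _ in range(4)]
psi0 = [0j] * DIM
psi0[0] = 1 + 0j                              # the prepared |000> state

# Forward scrambling followed by exact reversal recovers the initial state.
clean = scramble(scramble(psi0, angles), angles, inverse=True)
print(round(abs(clean[0]) ** 2, 6))           # 1.0

# Damage one qubit mid-protocol (a bit-flip here), then unscramble.
X = [[0, 1], [1, 0]]
damaged = scramble(apply_1q(scramble(psi0, angles), X, 0), angles, inverse=True)
print(abs(damaged[0]) ** 2)                   # overlap with |000> after the damage
```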

The mechanism behind all this is known as entanglement. As quantum objects interact, their states become highly correlated, or "entangled", in a way that serves to diffuse localised information about the state of one quantum object across the system as a whole. Damage to one part of the system does not destroy information in the same way as it would with a classical system. Instead of losing your work when your laptop crashes, having a highly entangled system is a bit like having back-ups stashed in every room of the house. Even though the information held in the disturbed qubit is lost, its links with the other qubits in the system can act to restore it.

The upshot is that the butterfly effect seems not to apply to quantum systems. Besides making life safe for tiny time-travellers, that may have implications for quantum computing, too, a field into which companies and countries are investing billions of dollars. "We think of quantum systems, especially in quantum computing, as very fragile," says Natalia Ares, a physicist at the University of Oxford. That this result demonstrates quantum systems can in fact be unexpectedly robust is an encouraging finding, and bodes well for potential future advances in the field.

This article appeared in the Science & technology section of the print edition under the headline "A flutter in time"

Read more:

Quantum mechanics is immune to the butterfly effect - The Economist

August 14th, 2020 at 11:51 pm

Major quantum computational breakthrough is shaking up physics and maths – The Conversation UK

Posted: at 11:51 pm



MIP* = RE is not a typo. It is a groundbreaking discovery and the catchy title of a recent paper in the field of quantum complexity theory. Complexity theory is a zoo of "complexity classes", collections of computational problems, of which MIP* and RE are but two.

The 165-page paper shows that these two classes are the same. That may seem like an insignificant detail in an abstract theory without any real-world application. But physicists and mathematicians are flocking to visit the zoo, even though they probably don't understand it all, because it turns out the discovery has astonishing consequences for their own disciplines.

In 1936, Alan Turing showed that the Halting Problem (algorithmically deciding whether a computer program halts or loops forever) cannot be solved. Modern computer science was born. Its success gave the impression that soon all practical problems would yield to the tremendous power of the computer.

But it soon became apparent that, while some problems can be solved algorithmically, the actual computation would last long after our Sun has engulfed the computer performing it. Figuring out how to solve a problem algorithmically was not enough: it was vital to classify solutions by efficiency. Complexity theory classifies problems according to how hard it is to solve them, with the hardness of a problem measured in terms of how long the computation lasts.

RE stands for problems that can be solved by a computer. It is the zoo. Let's have a look at some subclasses.

The class P consists of problems which a known algorithm can solve quickly (technically, in polynomial time). For instance, multiplying two numbers belongs to P since long multiplication is an efficient algorithm to solve the problem. The problem of finding the prime factors of a number is not known to be in P; the problem can certainly be solved by a computer but no known algorithm can do so efficiently. A related problem, deciding if a given number is a prime, was in similar limbo until 2004 when an efficient algorithm showed that this problem is in P.

Another complexity class is NP. Imagine a maze. "Is there a way out of this maze?" is a yes/no question. If the answer is yes, then there is a simple way to convince us: simply give us the directions, we'll follow them, and we'll find the exit. If the answer is no, however, we'd have to traverse the entire maze without ever finding a way out to be convinced.

Such yes/no problems, for which, if the answer is yes, we can efficiently demonstrate that fact, belong to NP. Any solution to a problem serves to convince us of the answer, and so P is contained in NP. Surprisingly, a million-dollar question is whether P = NP. Nobody knows.
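The asymmetry behind NP can be made concrete with the factoring problem mentioned earlier: checking a claimed factorization is a single multiplication, while finding one may require a long search. A small illustrative sketch (trial division is the simplest search, not an efficient algorithm):

```python
# Verifying a claimed solution is easy even when finding one may be hard:
# checking that p * q == n is one multiplication, while recovering p and q
# from n by trial division takes time growing with the smallest factor.
def verify_factors(n: int, p: int, q: int) -> bool:
    return 1 < p < n and 1 < q < n and p * q == n

def find_factor(n: int) -> int:
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n      # no divisor found: n is prime

n = 3 * 331
p = find_factor(n)
print(verify_factors(n, p, n // p))  # True
```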

The classes described so far represent problems faced by a normal computer. But computers are fundamentally changing: quantum computers are being developed. If a new type of computer comes along and claims to solve one of our problems, how can we trust it is correct?

Imagine an interaction between two entities, an interrogator and a prover. In a police interrogation, the prover may be a suspect attempting to prove their innocence. The interrogator must decide whether the prover is sufficiently convincing. There is an imbalance; knowledge-wise the interrogator is in an inferior position.

In complexity theory, the interrogator is the person, with limited computational power, trying to solve the problem. The prover is the new computer, which is assumed to have immense computational power. An interactive proof system is a protocol that the interrogator can use in order to determine, at least with high probability, whether the prover should be believed. By analogy, these are crimes that the police may not be able to solve, but at least innocents can convince the police of their innocence. This is the class IP.

If multiple provers can be interrogated, and the provers are not allowed to coordinate their answers (as is typically the case when the police interrogate multiple suspects), then we get to the class MIP. Such interrogations, via cross-examining the provers' responses, provide the interrogator with greater power, so MIP contains IP.

Quantum communication is a new form of communication carried out with qubits. Entanglement, a quantum feature in which qubits are "spookily" correlated even if separated, makes quantum communication fundamentally different to ordinary communication. Allowing the provers of MIP to share an entangled qubit leads to the class MIP*.

It seems obvious that communication between the provers can only serve to help them coordinate lies, rather than assist the interrogator in discovering truth. For that reason, nobody expected that allowing more communication would make more computational problems reliably solvable. Surprisingly, we now know that MIP* = RE. This means that quantum communication behaves wildly differently to normal communication.

In the 1970s, Alain Connes formulated what became known as the Connes Embedding Problem. Grossly simplified, this asked whether infinite matrices can be approximated by finite matrices. The new paper has now proved that this isn't always possible, an important finding for pure mathematicians.

In 1993, meanwhile, Boris Tsirelson pinpointed a problem in physics now known as Tsirelson's Problem. This concerned two different mathematical formalisms of a single situation in quantum mechanics, to date an incredibly successful theory that explains the subatomic world. Being two different descriptions of the same phenomenon, the two formalisms were expected to be mathematically equivalent.

But the new paper now shows that they aren't. Exactly how they can both still yield the same results and both describe the same physical reality is unknown, but it is why physicists are also suddenly taking an interest.

Time will tell what other unanswered scientific questions will yield to the study of complexity. Undoubtedly, MIP* = RE is a great leap forward.

See more here:

Major quantum computational breakthrough is shaking up physics and maths - The Conversation UK

August 14th, 2020 at 11:51 pm

