
Archive for the ‘Quantum Computer’ Category

Making Sense of the Science and Philosophy of Devs – The Ringer

Posted: April 16, 2020 at 6:48 am



Let me welcome you the same way Stewart welcomes Forest in Episode 7 of the Hulu miniseries Devs: with a lengthy, unattributed quote.

We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future, just like the past, would be present before its eyes.

It's a passage that sounds as if it could have come from Forest himself. But it's not from Forest, or Katie, or even (as Katie might guess, based on her response to Stewart's Philip Larkin quote) Shakespeare. It's from the French scholar and scientist Pierre-Simon Laplace, who wrote the idea down at the end of the Age of Enlightenment, in 1814. When Laplace imagined an omniscient intellect, which has come to be called Laplace's demon, he wasn't even saying something original: Other thinkers beat him to the idea of a deterministic, perfectly predictable universe by decades and centuries (or maybe millennia).

All of which is to say that despite the futuristic setting and high-tech trappings of Devs (the eight-part Alex Garland opus that will reach its finale next week), the series' central tension is about as old as the abacus. But there's a reason the debate about determinism and free will keeps recurring: It's an existential question at the heart of human behavior. Devs doesn't answer it in a dramatically different way than the great minds of history have, but it does wrap up ancient, brain-breaking quandaries in a compelling (and occasionally kind of confusing) package. Garland has admitted as much, acknowledging, "None of the ideas contained here are really my ideas, and it's not that I am presenting my own insightful take. It's more I'm saying some very interesting people have come up with some very interesting ideas. Here they are in the form of a story."

Devs is a watchable blend of a few engaging ingredients. It's a spy thriller that pits Russian agents against ex-CIA operatives. It's a cautionary, sci-fi polemic about a potentially limitless technology and the hubris of big tech. Like Garland's previous directorial efforts, Annihilation and Ex Machina, it's also a striking aesthetic experience, a blend of brutalist compounds, sleek lines, lush nature, and an exciting, unsettling soundtrack. Most of all, though, it's a meditation on age-old philosophical conundrums, served with a garnish of science. Garland has cited scientists and philosophers as inspirations for the series, so to unravel the riddles of Devs, I sought out some experts whose day jobs deal with the dilemmas Lily and Co. confront in fiction: a computer science professor who specializes in quantum computing, and several professors of philosophy.

There are many questions about Devs that we won't be able to answer. How high is Kenton's health care premium? Is it distracting to work in a lab lit by a perpetually pulsing, unearthly golden glow? How do Devs programmers get any work done when they could be watching the world's most riveting reality TV? Devs doesn't disclose all of its inner workings, but by the end of Episode 7, it's pulled back the curtain almost as far as it can. The main mystery of the early episodes (what does Devs do?) is essentially solved for the viewer long before Lily learns everything via Katie's parable of the pen in Episode 6. As the series proceeds, the spy stuff starts to seem incidental, and the characters' motivations become clear. All that remains to be settled is the small matter of the intractable puzzles that have flummoxed philosophers for ages.

Here's what we know. Forest (Nick Offerman) is a tech genius obsessed with one goal: being reunited with his dead daughter, Amaya, who was killed in a car crash while her mother was driving and talking to Forest on the phone. (He'd probably blame himself for the accident if he believed in free will.) He doesn't disguise the fact that he hasn't moved on from Amaya emotionally: He names his company after her, uses her face for its logo, and, in case those tributes were too subtle, installs a giant statue of her at corporate HQ. (As a metaphor for the way Amaya continues to loom over his life, the statue is overly obvious, but at least it looks cool.) Together with a team of handpicked developers, Forest secretly constructs a quantum computer so powerful that, by the end of the penultimate episode, it can perfectly predict the future and reverse-project the past, allowing the denizens of Devs to tune in to any bygone event in lifelike clarity. It's Laplace's demon made real, except for the fact that its powers of perception fail past the point at which Lily is seemingly scheduled to do something that the computer can't predict.

I asked Dr. Scott Aaronson, a professor of computer science at the University of Texas at Austin (and the founding director of the school's Quantum Information Center), to assess Devs' depiction of quantum computing. Aaronson's website notes that his research concentrates on the capabilities and limits of quantum computers, so he'd probably be one of Forest's first recruits if Amaya were an actual company. Aaronson, whom I previously consulted about the plausibility of the time travel in Avengers: Endgame, humored me again and watched Devs despite having been burned before by Hollywood's crimes against quantum mechanics. His verdict, unsurprisingly, is that the quantum computing in Devs, like that of Endgame, which cites one of the same physicists (David Deutsch) that Garland said inspired him, is mostly hand-wavy window dressing.

"A quantum computer is a device that uses a central phenomenon of quantum mechanics, namely, interference of amplitudes, to solve certain problems with dramatically better scaling behavior than any known algorithm running on any existing computer could solve them," Aaronson says. If you're wondering what amplitudes are, you can read Aaronson's explanation in a New York Times op-ed he authored last October, shortly after Google claimed to have achieved a milestone called quantum supremacy: the first use of a quantum computer to make a calculation far faster than any non-quantum computer could. According to Google's calculations, the task that its Sycamore microchip performed in a little more than three minutes would have taken 100,000 of the swiftest existing conventional computers 10,000 years to complete. That's a pretty impressive shortcut, and we're still only at the dawn of the quantum computing age.
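For readers who want a concrete picture of what "interference of amplitudes" means, here is a minimal numerical sketch (my own illustration, not Aaronson's): a qubit's state is a vector of amplitudes, a gate transforms them linearly, and applying the Hadamard gate twice makes the paths that lead to the |1> outcome cancel out.

```python
import numpy as np

# Minimal sketch: amplitudes are numbers attached to each basis state,
# and a quantum gate transforms the whole amplitude vector linearly.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate

state = np.array([1.0, 0.0])   # qubit starts in |0>: amplitude 1 for |0>, 0 for |1>
state = H @ state              # equal superposition: amplitudes ~[0.707, 0.707]
state = H @ state              # second Hadamard: the paths to |1> cancel (interfere)

print(np.round(state, 3))              # [1. 0.]  -> back to |0> with certainty
print(np.round(np.abs(state) ** 2, 3)) # measurement probabilities [1. 0.]
```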

However, that stat comes with a caveat: Quantum computers aren't better across the board than conventional computers. "The applications where a quantum computer dramatically outperforms classical computers are relatively few and specialized," Aaronson says. "As far as we know today, they'd help a lot with prediction problems only in cases where the predictions heavily involve quantum-mechanical behavior." Potential applications of quantum computers include predicting the rate of a chemical reaction, factoring huge numbers and possibly cracking the encryption that currently protects the internet (using Shor's algorithm, which is briefly mentioned on Devs), and solving optimization and machine learning problems. "Notice that reconstructing what Christ looked like on the cross is not on this list," Aaronson says.
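Shor's algorithm only gets a passing mention on the show, but the reason it matters for encryption can be sketched with ordinary classical code: once you know the period r of a^x mod N, which is the step a quantum computer would dramatically speed up, recovering the factors of N is easy. A toy illustration (my own, with the period found by brute force, which is exactly the part that does not scale):

```python
from math import gcd

# Classical post-processing step of Shor's algorithm, shown on a toy number.
# A quantum computer would find the period r efficiently; here we brute-force it.
N, a = 15, 7
r = 1
while pow(a, r, N) != 1:
    r += 1                               # r = 4 for a = 7, N = 15

factor1 = gcd(pow(a, r // 2) - 1, N)     # gcd(48, 15) = 3
factor2 = gcd(pow(a, r // 2) + 1, N)     # gcd(50, 15) = 5
print(f"period {r}, factors {factor1} x {factor2} = {N}")
```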

In other words, the objective that Forest is trying to achieve doesn't necessarily lie within the quantum computing wheelhouse. "To whatever extent computers can help forecast plausible scenarios for the past or future at all (as we already have them do for, e.g., weather forecasting), it's not at all clear to what extent a quantum computer even helps; one might simply want more powerful classical computers," Aaronson says.

Then there's the problem that goes beyond the question of quantum vs. conventional: Either kind of computer would require data on which to base its calculations, and the data set that the predictions and retrodictions in Devs would demand is inconceivably detailed. "I doubt that reconstructing the remote past is really a computational problem at all, in the sense that even the most powerful science-fiction supercomputer still couldn't give you reliable answers if it lacked the appropriate input data," Aaronson says, adding, "As far as we know today, the best that any computer (classical or quantum) could possibly do, even in principle, with any data we could possibly collect, is to forecast a range of possible futures, and a range of possible pasts. The data that it would need to declare one of them the real future or the real past simply wouldn't be accessible to humankind, but rather would be lost in microscopic puffs of air, radiation flying away from the earth into space, etc."

In light of the unimaginably high hurdle of gathering enough data in the present to reconstruct what someone looked or sounded like during a distant, data-free age, Forest comes out looking like a ridiculously demanding boss. We get it, dude: You miss Amaya. But how about patting your employees on the back for pulling off the impossible? "The idea that chaos, the butterfly effect, sensitive dependence on initial conditions, exponential error growth, etc. mean that you run your simulation 2000 years into the past and you end up with only a blurry, staticky image of Jesus on the cross rather than a clear image, has to be, like, the wildest understatement in the history of understatements," Aaronson says. As for the future, he adds, "Predicting the weather three weeks from now might be forever impossible."
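Aaronson's point about chaos and exponential error growth is easy to demonstrate numerically. In the toy sketch below (my own, using the logistic map as a stand-in for any chaotic system), two trajectories that start within a trillionth of each other become completely uncorrelated after a few dozen steps, which is why any tiny imperfection in the input data swamps a long-range prediction.

```python
# Toy illustration of sensitive dependence on initial conditions
# (logistic map with r = 3.9, a standard chaotic example).
def logistic(x, r=3.9):
    return r * x * (1 - x)

a, b = 0.400000000000, 0.400000000001   # initial conditions differ by 1e-12
for step in range(60):
    a, b = logistic(a), logistic(b)
    if step % 10 == 9:
        print(f"step {step + 1:2d}: |difference| = {abs(a - b):.3e}")
# By roughly step 50 the difference is of order 1: the microscopic initial
# error has grown to dominate the prediction entirely.
```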

The plot of this series is one that would've been totally, 100 percent familiar to the ancient Greeks; just swap out the quantum computer for the Delphic Oracle. Dr. Scott Aaronson, professor of computer science at the University of Texas at Austin

On top of all that, Aaronson says, "The Devs headquarters is sure a hell of a lot fancier (and cleaner) than any quantum computing lab that I've ever visited." (Does Kenton vacuum between torture sessions?) At least the computer more or less looks like a quantum computer.

OK, so maybe I didn't need to cajole a quantum computing savant into watching several hours of television to confirm that there's no way we can watch cavepeople paint. Garland isn't guilty of any science sins that previous storytellers haven't committed many times. Whenever Aaronson has advised scriptwriters, they've only asked him to tell them which sciencey words would make their preexisting implausible stories sound somewhat feasible. "It's probably incredibly rare that writers would let the actual possibilities and limits of a technology drive their story," he says.

Although the show name-checks real interpretations of quantum mechanics (Penrose, pilot wave, many-worlds), it doesn't deeply engage with them. The pilot wave interpretation holds that only one future is real, whereas many-worlds asserts that a vast number of futures are all equally real. But neither one would allow for the possibility of perfectly predicting the future, considering the difficulty of accounting for every variable. Garland is seemingly aware of how far-fetched his story is, because on multiple occasions, characters like Lily, Lyndon, and Stewart voice the audience's unspoken disbelief, stating that something or other isn't possible. Whenever they do, Katie or Forest is there to tell them that it is. Which, well, fine: Like Laplace's demon, Devs is intended as more of a thought experiment than a realistic scenario. As Katie says during her blue pill-red pill dialogue with Lily, "Go with it."

We might as well go along with Garland, because any scientific liberties he takes are in service of the series's deeper ideas. As Aaronson says, "My opinion is that the show isn't really talking about quantum computing at all; it's just using it as a fancy-sounding buzzword. Really it's talking about the far more ancient questions of determinism vs. indeterminism and predictability vs. unpredictability." He concludes, "The plot of this series is one that would've been totally, 100 percent familiar to the ancient Greeks; just swap out the quantum computer for the Delphic Oracle." Aaronson, who says he sort of likes Devs in spite of its quantum technobabble, would know: He wrote a book called Quantum Computing Since Democritus.

Speaking of Democritus, let's consult a few philosophers on the topic of free will. One of the most mind-bending aspects of Devs' adherence to hard determinism (the theory that human behavior is wholly dictated by outside factors) is its insistence that characters can't change their behavior even if they've seen the computer's prediction of what they're about to do. As Forest asks Katie, "What if one minute into the future we see you fold your arms, and you say, 'Fuck the future. I'm a magician. My magic breaks tram lines. I'm not going to fold my arms.' You put your hands in your pockets, and you keep them there until the clock runs out."

It seems as if she should be able to do what she wants with her hands, but Katie quickly shuts him down. "Cause precedes effect," she says. "Effect leads to cause. The future is fixed in exactly the same way as the past. The tram lines are real." Of course, Katie could be wrong: A character could defy the computer's prediction in the finale. (Perhaps that's the mysterious unforeseeable event.) But we've already seen some characters fail to exit the tram. In an Episode 7 scene, which, as Aaronson notes, is highly reminiscent of the VHS scene in Spaceballs, we see multiple members of the Devs team repeat the same statements that they've just heard the computer predict they would make a split second earlier. They can't help but make the prediction come true. Similarly, Lily ends up at Devs at the end of Episode 7, despite resolving not to.

Putting aside the implausibility of a perfect prediction existing at all, does it make sense that these characters couldn't deviate from their predicted course? Yes, according to five professors of philosophy I surveyed. Keep in mind what Garland has cited as a common criticism of his work: that "the ideas I talk about are sophomoric because they're the kinds of things that people talk about when they're getting stoned in their dorm rooms." We're about to enter the stoned zone.

"In this story, [the characters] are in a totally deterministic universe," says Ben Lennertz, an assistant professor of philosophy at Colgate University. "In particular, the watching of the video of the future itself has been determined by the original state of the universe and the laws. It's not as if things were going along and the person was going to cross their arms, but then a non-deterministic miracle occurred and they were shown a video of what they were going to do. The watching of the video and the person's reaction is part of the same progression as the scene the video is of." In essence, the computer would have already predicted its own predictions, as well as every character's reaction to them. Everything that happens was always part of the plan.

Ohio Wesleyan University's Erin Flynn echoes that interpretation. "The people in those scenes do what they do not despite being informed that they will do it, but (in part) because they have been informed that they will do it," Flynn says. (Think of Katie telling Lyndon that he's about to balance on the bridge railing.) "This is not to say they will be compelled to conform, only that their knowledge presumably forms an important part of the causal conditions leading to their actions. When the computer sees the future, the computer sees that what they will do is necessitated in part by this knowledge." The computer would presumably have made different predictions had people never heard them.

Furthermore, adds David Landy of San Francisco State University, the fact that we see something happen one way doesn't mean that it couldn't have happened otherwise. "Suppose we know that some guy is going to fold his arms," Landy says. "Does it follow that he lacks the ability to not fold his arms? Well, no, because what we usually mean by 'has the ability to not fold his arms' is that if things had gone differently, he wouldn't have folded his arms. But by stipulating at the start that he is going to fold his arms, we also stipulate that things aren't going to go differently. But it can remain true that if they did go differently, he would not have folded his arms. So, he might have that ability, even if we know he is not going to exercise it."

We should expect weird things to happen when we are talking about a very weird situation. David Landy, San Francisco State University professor

If your head has started spinning, you can see why the Greeks didn't settle this stuff long before Garland got to it. And if it still seems strange that Forest seemingly can't put his hands in his pockets, well, what doesn't seem strange in the world of Devs? "We should expect weird things to happen when we are talking about a very weird situation," Landy says. "That is, we are used to people reliably doing what they want to do. But we have become used to that by making observations in a certain environment: one without time travel or omniscient computers. Introducing those things changes the environment, so we shouldn't be surprised if our usual inferences no longer hold."

Here's where we really might want to mime a marijuana hit. Neal Tognazzini of Western Washington University points out that one could conceivably appear to predict the future by tapping into a future that already exists. "Many philosophers reject determinism but nevertheless accept that there are truths about what will happen in the future, because they accept a view in the philosophy of time called eternalism, which is (roughly) the block universe idea: past, present, and future are all parts of reality," Tognazzini says. This theory says that the past and the future exist some temporal distance from the present; we just haven't yet learned to travel between them. Thus, Tognazzini continues, "You can accept eternalism about time without accepting determinism, because the first is just a view about whether the future is real whereas the second is a view about how the future is connected to the past (i.e., whether there are tram lines)."

According to that school of thought, the future isn't what has to happen, it's simply what will happen. If we somehow got a glimpse of our futures from the present, it might appear as if our paths were fixed. But those futures actually would have been shaped by our freely chosen actions in the interim. As Tognazzini says, "It's a fate of our own making, which is just to say, no fate at all."

If we accept that the members of Devs know what they're doing, though, then the computer's predictions are deterministic, and the past does dictate the future. That's disturbing, because it seemingly strips us of our agency. But, Tognazzini says, "Even then, it's still the case that what we do now helps to shape that future. We still make a difference to what the future looks like, even if it's the only difference we could have made, given the tram lines we happen to be on. Determinism isn't like some force that operates independently of what we want, making us marionettes. If it's true, then it would apply equally to our mental lives as well, so that the future that comes about might well be exactly the future we wanted."

This is akin to the compatibilist position espoused by David Hume, which seeks to reconcile the seemingly conflicting concepts of determinism and free will. As our final philosopher, Georgetown University's William Blattner, says, "If determinism is to be plausible, it must find a way to save the appearances, in this case, explain why we feel like we're choosing, even if at some level the choice is an illusion." The compatibilist perspective concedes that there may be only one possible future, but, Flynn says, insists "that there is a difference between being causally determined (necessitated) to act and being forced or compelled to act. As long as one who has seen their future does not do what has been predicted because they were forced to do it (against their will, so to speak), then they will still have done it freely."

In the finale, we'll find out whether the computer's predictions are as flawless and inviolable as Katie claims. We'll also likely learn one of Devs' most closely kept secrets: what Forest intends to do with his perfect model of Amaya. The show hasn't hinted that the computer can resurrect the dead in any physical fashion, so unless Forest is content to see his simulated daughter on a screen, he may try to enter the simulation himself. In Episode 7, Devs seemed to set the stage for such a step; as Stewart said, "That's the reality right there. It's not even a clone of reality. The box contains everything."

Would a simulated Forest, united with his simulated daughter, be happier inside the simulation than he was in real life, assuming he's aware he's inside the simulation? The philosopher Robert Nozick explored a similar question with his hypothetical experience machine. The experience machine would stimulate our brains in such a way that it could supply as much pleasure as we wanted, in any form. It sounds like a nice place to visit, and yet most of us wouldn't want to live there. That reluctance to enter the experience machine permanently seems to suggest that we see some value in an authentic connection to reality, however unpleasurable. "Thinking I'm hanging out with my family and friends is just different from actually hanging out with my family and friends," Tognazzini says. "And since I think relationships are key to happiness, I'm skeptical that we could be happy in a simulation."

If reality were painful enough, though, the relief from that pain might be worth the sacrifice. "Suppose, for instance, that the real world had become nearly uninhabitable or otherwise full of misery," Flynn says. "It seems to me that life in a simulation might be experienced as a sanctuary. Perhaps one's experience there would be tinged with sadness for the lost world, but I'm not sure knowing it's a simulation would necessarily keep one from being happy in it." Forest still seems miserable about Amaya IRL, so for him, that trade-off might make sense.

What's more, if real life is totally deterministic, then Forest may not draw a distinction between life inside and outside of his quantum computer. "If freedom is a critical component of fulfillment, then it's hard to see how we could be fulfilled in a simulation," Blattner says. But for Forest, freedom isn't an option anywhere. "Something about the situation seems sad, maybe pathetic, maybe even tragic," Flynn says. "But if the world is a true simulation in the manner described, why not just understand it as the ability to visit another real world in which his daughter exists?"

Those who subscribe to the simulation hypothesis believe that what we think of as real life (including my experience of writing this sentence and your experience of reading it) is itself a simulation created by some higher order of being. In our world, it may seem dubious that such a sophisticated creation could exist (or that anything or anyone would care to create it). But in Forest's world, a simulation just as sophisticated as real life already exists inside Devs, which means that what Forest perceives as real life could be someone else's simulation. If he's possibly stuck inside a simulation either way, he might as well choose the one with Amaya (if he has a choice at all).

Garland chose to tell this story on TV because on the big screen, he said, it would have been "slightly too truncated." On the small screen, it's probably slightly too long: Because we've known more than Lily all along, what she's learned in later episodes has rehashed old info for us. Then again, Devs has felt familiar from the start. If Laplace got a pass for recycling Cicero and Leibniz, we'll give Garland a pass for channeling Laplace. What's one more presentation of a puzzle that's had humans flummoxed forever?

Read more:

Making Sense of the Science and Philosophy of Devs - The Ringer


D-Wave makes its quantum computers free to anyone working on the coronavirus crisis – VentureBeat

Posted: April 2, 2020 at 7:49 am



D-Wave today made its quantum computers available for free to researchers and developers working on responses to the coronavirus (COVID-19) crisis. D-Wave partners and customers Cineca, Denso, Forschungszentrum Jülich, Kyocera, MDR, Menten AI, NEC, OTI Lumionics, QAR Lab at LMU Munich, Sigma-i, Tohoku University, and Volkswagen are also offering to help. They will provide access to their engineering teams with expertise on how to use quantum computers, formulate problems, and develop solutions.

Quantum computing leverages qubits to perform computations that would be much more difficult, or simply not feasible, for a classical computer. Based in Burnaby, Canada, D-Wave was the first company to sell commercial quantum computers, which are built to use quantum annealing. D-Wave says the move to make access free is a response to a cross-industry request from the Canadian government for solutions to the COVID-19 pandemic. Free and unlimited commercial contract-level access to D-Wave's quantum computers is available in 35 countries across North America, Europe, and Asia via Leap, the company's quantum cloud service. Just last month, D-Wave debuted Leap 2, which includes a hybrid solver service and solves problems of up to 10,000 variables.

D-Wave and its partners are hoping the free access to quantum processing resources and quantum expertise will help uncover solutions to the COVID-19 crisis. We asked the company if there were any specific use cases it expects to bear fruit. D-Wave listed analyzing new methods of diagnosis, modeling the spread of the virus, supply distribution, and pharmaceutical combinations. D-Wave CEO Alan Baratz added a few more to the list.

"The D-Wave system, by design, is particularly well-suited to solve a broad range of optimization problems, some of which could be relevant in the context of the COVID-19 pandemic," Baratz told VentureBeat. "Potential applications that could benefit from hybrid quantum/classical computing include drug discovery and interactions, epidemiological modeling, hospital logistics optimization, medical device and supply manufacturing optimization, and beyond."
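To give a sense of what "formulating a problem" for an annealer looks like in practice, here is a minimal sketch using D-Wave's open-source dimod package. The QUBO itself is a hypothetical toy, and it is solved exactly on a laptop; with Leap access, a hybrid or quantum sampler (such as LeapHybridSampler from dwave.system) would be swapped in for problems at the scale the article describes.

```python
import dimod

# Toy QUBO: reward picking exactly one of x0, x1; penalize picking both.
Q = {("x0", "x0"): -1.0,
     ("x1", "x1"): -1.0,
     ("x0", "x1"): 2.0}

bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

# ExactSolver enumerates every assignment -- fine for a toy problem.
# On Leap, a hybrid sampler would handle problems with thousands of variables.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset.first.sample, sampleset.first.energy)
# e.g. {'x0': 1, 'x1': 0} with energy -1.0
```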

Earlier this month, Murray Thom, D-Wave's VP of software and cloud services, told us quantum computing and machine learning are "extremely well matched." In today's press release, Prof. Dr. Kristel Michielsen from the Jülich Supercomputing Centre seemed to suggest a similar notion: "To make efficient use of D-Wave's optimization and AI capabilities, we are integrating the system into our modular HPC environment."

Read more:

D-Wave makes its quantum computers free to anyone working on the coronavirus crisis - VentureBeat


We’re Getting Closer to the Quantum Internet, But What Is It? – HowStuffWorks

Posted: at 7:49 am




Back in February 2020, scientists from the U.S. Department of Energy's Argonne National Laboratory and the University of Chicago revealed that they had achieved quantum entanglement, in which the behavior of a pair of tiny particles becomes linked so that their states are identical, over a 52-mile (83.7-kilometer) quantum-loop network in the Chicago suburbs.

You may be wondering what all the fuss is about, if you're not a scientist familiar with quantum mechanics, that is, the behavior of matter and energy at the smallest scale of reality, which is peculiarly different from the world we can see around us.

But the researchers' feat could be an important step in the development of a new, vastly more powerful version of the internet in the next few decades. Instead of the bits that today's network uses, which can only express a value of either 0 or 1, the future quantum internet would utilize qubits of quantum information, which can take on an infinite number of values. (A qubit is the unit of information for a quantum computer; it's like a bit in an ordinary computer.)

That would give the quantum internet way more bandwidth, which would make it possible to connect super-powerful quantum computers and other devices and run massive applications that simply aren't possible with the internet we have now.

"A quantum internet will be the platform of a quantum ecosystem, where computers, networks, and sensors exchange information in a fundamentally new manner where sensing, communication, and computing literally work together as one entity, " explains David Awschalom via email. He's a spintronics and quantum information professor in the Pritzker School of Molecular Engineering at the University of Chicago and a senior scientist at Argonne, who led the quantum-loop project.

So why do we need this and what does it do? For starters, the quantum internet is not a replacement of the regular internet we now have. Rather it would be a complement to it or a branch of it. It would be able to take care of some of the problems that plague the current internet. For instance, a quantum internet would offer much greater protection from hackers and cybercriminals. Right now, if Alice in New York sends a message to Bob in California over the internet, that message travels in more or less a straight line from one coast to the other. Along the way, the signals that transmit the message degrade; repeaters read the signals, amplify and correct the errors. But this process allows hackers to "break in" and intercept the message.

However, a quantum message wouldn't have that problem. Quantum networks use particles of light, photons, to send messages, which are not vulnerable to cyberattacks. Instead of encrypting a message using mathematical complexity, says Ray Newell, a researcher at Los Alamos National Laboratory, we would rely upon the peculiar rules of quantum physics. With quantum information, "you can't copy it or cut it in half, and you can't even look at it without changing it." In fact, just trying to intercept a message destroys the message, as Wired magazine noted. That would enable encryption that would be vastly more secure than anything available today.
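A rough way to see why "you can't even look at it without changing it" protects a quantum link: in a BB84-style key exchange, an eavesdropper who measures photons in a randomly guessed basis and re-sends them introduces errors that the sender and receiver can detect by comparing a sample of their bits. The Monte Carlo sketch below is my own simplified illustration of that statistics, not a description of any deployed system.

```python
import random

def intercept_resend_error_rate(n=100_000):
    """Estimate the error rate an eavesdropper causes in a BB84-style exchange."""
    errors = kept = 0
    for _ in range(n):
        bit = random.randint(0, 1)
        basis_a = random.randint(0, 1)          # sender's encoding basis
        basis_e = random.randint(0, 1)          # eavesdropper guesses a basis
        # If Eve guesses the wrong basis, her result (and the photon she re-sends)
        # is random with respect to the original encoding.
        bit_after_eve = bit if basis_e == basis_a else random.randint(0, 1)
        basis_b = random.randint(0, 1)          # receiver's measurement basis
        if basis_b != basis_a:
            continue                            # mismatched rounds are discarded anyway
        received = bit_after_eve if basis_b == basis_e else random.randint(0, 1)
        kept += 1
        errors += (received != bit)
    return errors / kept

print(f"error rate with an eavesdropper: {intercept_resend_error_rate():.3f}")
# ~0.25; without an eavesdropper the kept rounds would essentially always agree.
```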

"The easiest way to understand the concept of the quantum internet is through the concept of quantum teleportation," Sumeet Khatri, a researcher at Louisiana State University in Baton Rouge, says in an email. He and colleagues have written a paper about the feasibility of a space-based quantum internet, in which satellites would continually broadcast entangled photons down to Earth's surface, as this Technology Review article describes.

"Quantum teleportation is unlike what a non-scientist's mind might conjure up in terms of what they see in sci-fi movies, " Khatri says. "In quantum teleportation, two people who want to communicate share a pair of quantum particles that are entangled. Then, through a sequence of operations, the sender can send any quantum information to the receiver (although it can't be done faster than light speed, a common misconception). This collection of shared entanglement between pairs of people all over the world essentially constitutes the quantum internet. The central research question is how best to distribute these entangled pairs to people distributed all over the world. "

Once it's possible to do that on a large scale, the quantum internet would be so astonishingly fast that far-flung clocks could be synchronized about a thousand times more precisely than the best atomic clocks available today, as Cosmos magazine details. That would make GPS navigation vastly more precise than it is today, and map Earth's gravitational field in such detail that scientists could spot the ripple of gravitational waves. It also could make it possible to teleport photons from distant visible-light telescopes all over Earth and link them into a giant virtual observatory.

"You could potentially see planets around other stars, " says Nicholas Peters, group leader of the Quantum Information Science Group at Oak Ridge National Laboratory.

It also would be possible for networks of super-powerful quantum computers across the globe to work together and create incredibly complex simulations. That might enable researchers to better understand the behavior of molecules and proteins, for example, and to develop and test new medications.

It also might help physicists to solve some of the longstanding mysteries of reality. "We don't have a complete picture of how the universe works," says Newell. "We have a very good understanding of how quantum mechanics works, but not a very clear picture of the implications. The picture is blurry where quantum mechanics intersects with our lived experience."

But before any of that can happen, researchers have to figure out how to build a quantum internet, and given the weirdness of quantum mechanics, that's not going to be easy. "In the classical world you can encode information and save it and it doesn't decay," Peters says. "In the quantum world, you encode information and it starts to decay almost immediately."

Another problem is that because the amount of energy that corresponds to quantum information is really low, it's difficult to keep it from interacting with the outside world. Today, "in many cases, quantum systems only work at very low temperatures," Newell says. "Another alternative is to work in a vacuum and pump all the air out."

In order to make a quantum internet function, Newell says, we'll need all sorts of hardware that hasn't been developed yet. So it's hard to say at this point exactly when a quantum internet would be up and running, though one Chinese scientist has envisioned that it could happen as soon as 2030.

Here is the original post:

We're Getting Closer to the Quantum Internet, But What Is It? - HowStuffWorks


Q-CTRL to Host Live Demos of ‘Quantum Control’ Tools – Quantaneo, the Quantum Computing Source

Posted: at 7:49 am



Q-CTRL, a startup that applies the principles of control engineering to accelerate the development of the first useful quantum computers, will host a series of online demonstrations of new quantum control tools designed to enhance the efficiency and stability of quantum computing hardware.

Dr. Michael Hush, Head of Quantum Science and Engineering at Q-CTRL, will provide an overview of the company's cloud-based quantum control engineering software called BOULDER OPAL. This software uses custom machine learning algorithms to create error-robust logical operations in quantum computers. The team will demonstrate, using real quantum computing hardware in real time, how they reduce susceptibility to error by 100X and improve hardware stability in time by 10X, while reducing time-to-solution by 10X against existing software.

Scheduled to accommodate the global quantum computing research base, the demonstrations will take place:

April 16 from 4-4:30 p.m. U.S. Eastern Time (ET)
April 21 from 10-10:30 a.m. Singapore Time (SGT)
April 23 from 10-10:30 a.m. Central European Summer Time (CEST)

To register, visit https://go.q-ctrl.com/l/791783/2020-03-19/dk83

Released in Beta by Q-CTRL in March, BOULDER OPAL is an advanced Python-based toolkit for developers and R&D teams using quantum control in their hardware or theoretical research. Technology agnostic and with major computational grunt delivered seamlessly via the cloud, BOULDER OPAL enables a range of essential tasks which improve the performance of quantum computing and quantum sensing hardware. This includes the efficient identification of sources of noise and error, calculating detailed error budgets in real lab environments, creating new error-robust logic operations for even the most complex quantum circuits, and integrating outputs directly into real hardware.

The result for users is greater performance from today's quantum computing hardware, without the need to become an expert in quantum control engineering.

Experimental validations and an overview of the software architecture, developed in collaboration with the University of Sydney, were recently released in an online technical manuscript titled "Software Tools for Quantum Control: Improving Quantum Computer Performance through Noise and Error Suppression."

See the original post here:

Q-CTRL to Host Live Demos of 'Quantum Control' Tools - Quantaneo, the Quantum Computing Source


Disrupt The Datacenter With Orchestration – The Next Platform

Posted: at 7:49 am



Since 1965, the computer industry has relied on Moore's Law to accelerate innovation, pushing more transistors into integrated circuits to improve computation performance. Making transistors smaller helped lift all boats for the entire industry and enabled new applications. At some point, we will reach a physical limit, that is, a limit stemming from physics itself. Even with this setback, improvements kept on pace thanks to increased parallelism of computation and consolidation of specialized functions into single chip packages, such as systems on chip.

In recent years, we have been nearing another peak. This article proposes to improve computation performance not only by building better hardware, but by changing how we use existing hardware. More specifically, the focus is on how we use existing processor types. I call this approach Compute Orchestration: automatic optimization of machine code to best use the modern datacenter hardware (again, with special emphasis on different processor types).

So what is compute orchestration? It is the embracing of hardware diversity to support software.

There are many types of processors: Microprocessors in small devices, general purpose CPUs in computers and servers, GPUs for graphics and compute, and programmable hardware like FPGAs. In recent years, specialized processors like TPUs and neuromorphic processors for machine learning are rapidly entering the datacenter.

There is potential in this variety: Instead of statically utilizing each processor for pre-defined functions, we can use existing processors as a swarm, each processor working on the most suitable workloads. Doing that, we can potentially deliver more computation bandwidth with less power, lower latency and lower total cost of ownership.

Non-standard utilization of existing processors is already happening: GPUs, for example, were already adapted from processors dedicated to graphics into a core enterprise component. Today, GPUs are used for machine learning and cryptocurrency mining, for example.

I call the technology to utilize the processors as a swarm Compute Orchestration. Its tenets can be described in four simple bullets:

Compute orchestration is, in short, automatic adaptation of binary code and automatic allocation to the most suitable processor types available. I split the evolution of compute orchestration into four generations:

Compute Orchestration Gen 1: Static Allocation To Specialized Co-Processors

This type of compute orchestration is everywhere. Most devices today include co-processors to offload some specialized work from the CPU. Usually, the toolchain or runtime environment takes care of assigning workloads to the co-processor. This is seamless to the developer, but also limited in functionality.

The best-known example is the use of cryptographic co-processors for relevant functions. Being liberal in our definition of co-processor, Memory Management Units (MMUs), which manage virtual memory address translation, can also be considered an example.

Compute Orchestration Gen 2: Static Allocation, Heterogeneous Hardware

This is where we are at now. In the second generation, the software relies on libraries, dedicated run time environments and VMs to best use the available hardware. Let's call the collection of components that help better use the hardware "frameworks." Current frameworks implement specific code to better use specific processors. Most prevalent are frameworks that know how to utilize GPUs in the cloud. Usually, better allocation to bare metal hosts remains the responsibility of the developer. For example, the developer/DevOps engineer needs to make sure a machine with a GPU is available for the relevant microservice. This phenomenon is what brought me to think of Compute Orchestration in the first place, as it proves there is more slack in our current hardware.

Common frameworks like OpenCL allow programming compute kernels to run on different processors. TensorFlow allows assigning nodes in a computation graph to different processors (devices).
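To make the "assigning nodes to devices" point concrete, here is a small TensorFlow sketch (my own example, not from the article) that pins one matrix multiplication to the CPU and, when one is present, another to a GPU. Today this placement is still an explicit choice made by the developer, which is exactly the manual effort compute orchestration aims to remove.

```python
import tensorflow as tf

a = tf.random.uniform((2048, 2048))
b = tf.random.uniform((2048, 2048))

# Explicitly pin an operation to the CPU.
with tf.device("/CPU:0"):
    cpu_result = tf.matmul(a, b)

# Pin to a GPU only if one is actually available; otherwise fall back to CPU.
device = "/GPU:0" if tf.config.list_physical_devices("GPU") else "/CPU:0"
with tf.device(device):
    accel_result = tf.matmul(a, b)

print("accelerated op ran on:", accel_result.device)
```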

This better use of hardware by using existing frameworks is great. However, I believe there is a bigger edge. Existing frameworks still require effort from the developer to be optimal; they rely on the developer. Also, no legacy code from 2016 (for example) is ever going to utilize a modern datacenter GPU cluster. My view is that by developing automated and dynamic frameworks that adapt to the hardware and workload, we can achieve another leap.

Compute Orchestration Gen 3: Dynamic Allocation To Heterogeneous Hardware

Computation can take an example from the storage industry: products for better utilization and reliability of storage hardware have been a focus of innovation for years. Storage startups develop abstraction layers and special filesystems that improve efficiency and reliability of existing storage hardware. Computation, on the other hand, remains a stupid allocation of hardware resources. Smart allocation of computation workloads to hardware could result in better performance and efficiency for big data centers (for example hyperscalers like cloud providers). The infrastructure for such allocation is here, with current data center designs pushing to more resource disaggregation, introduction of diverse accelerators, and increased work on automatic acceleration (for example: Workload-aware Automatic Parallelization for Multi-GPU DNN Training).

For high level resource management, we already have automatic allocation: for example, project Mesos focusing on fine-grained resource sharing, Slurm for cluster management, and several extensions using Kubernetes operators.

To further advance from here would require two steps: automatic mapping of available processors (which we call the compute environment) and workload adaptation. Imagine a situation where the developer doesn't have to optimize her code to the hardware. Rather, the runtime environment identifies the available processing hardware and automatically optimizes the code. Cloud environments are heterogeneous and changing, and the code should change accordingly (in fact it's not the code, but the execution model in the run time environment of the machine code).

Compute Orchestration Gen 4: Automatic Allocation To Dynamic Hardware

A thought, even a possibility, can shatter and transform us. Friedrich Wilhelm Nietzsche

The quote above is to say that we are far from practical implementation of the concept described here (as far as I know). We can, however, imagine a technology that dynamically re-designs a data center to serve the needs of running applications. This change in the way whole data centers meet computation needs has already started. FPGAs are used more often and appear in new places (FPGAs in hosts, FPGA machines in AWS, SmartNICs), providing the framework for constant reconfiguration of hardware.

To illustrate the idea, I will use an example: Microsoft initiated project Catapult, augmenting CPUs with an interconnected and configurable compute layer composed of programmable silicon. The timeline on the project's website is fascinating. The project started off in 2010, aiming to improve search queries by using FPGAs. Quickly, it proposed the use of FPGAs as "bumps in the wire," adding computation in new areas of the data path. Project Catapult also designed an architecture for using FPGAs as a distributed resource pool serving all the data center. Then, the project spun off Project BrainWave, utilizing FPGAs for accelerating AI/ML workloads.

This was just an example of innovation in how we compute. Quick online search will bring up several academic works on the topic. All we need to reach the 4th generation is some idea synthesis, combining a few concepts together:

Low effort HDL generation (for example Merlin compiler, BORPH)

In essence, what I am proposing is to optimize computation by adding an abstraction layer that:

Automatic allocation on agile hardware is the recipe for best utilizing existing resources: faster, greener, cheaper.

The trends and ideas mentioned in this article can lead to many places. It may very well be that we are already working with existing hardware in the optimal way, but it is my belief that we are in the midst of the improvement curve. In recent years, we had increased innovation in basic hardware building blocks, new processors for example, but we still have room to improve in overall allocation and utilization. The more we deploy new processors in the field, the more slack we have in our hardware stack. New concepts, like edge computing and resource disaggregation, bring new opportunities for optimizing legacy code by smarter execution. To achieve that, legacy code can't be expected to be refactored. Developers and DevOps engineers can't be expected to optimize for the cloud configuration. We just need to execute code in a smarter way, and that is the essence of compute orchestration.

The conceptual framework described in this article should be further explored. We first need to find the killer app (what type of software we optimize to which type of hardware). From there, we can generalize. I was recently asked in a round table: what is the next generation of computation? Quantum computing? Tensor Processing Units? I responded that all of the above, but what we really need is better usage of the existing generation.

Guy Harpak is the head of technology at Mercedes-Benz Research & Development in its Tel Aviv, Israel facility. Please feel free to contact him on any thoughts on the topics above at harpakguy@gmail.com. Harpak notes that this contributed article reflects his personal opinion and is in no way related to people or companies that he works with or for.

Related Reading: If you find this article interesting, I would recommend researching the following topics:

Some interesting articles on similar topics:

Return Of The Runtimes: Rethinking The Language Runtime System For The Cloud 3.0 Era

The Deep Learning Revolution And Its Implications For Computer Architecture And Chip Design (by Jeffrey Dean from Google Research)

Beyond SmartNICs: Towards A Fully Programmable Cloud

Hyperscale Cloud: Reimagining Datacenters From Hardware To Applications

Read more from the original source:

Disrupt The Datacenter With Orchestration - The Next Platform


Quantum Computing: Will It Actually Produce Jobs? – Dice Insights

Posted: March 19, 2020 at 1:52 pm



If you're interested in tech, you've likely heard about the race to develop quantum computers. These systems compute via qubits, which exist not only as ones and zeros (as you find in traditional processors) but also in an in-between state known as superposition.

For tasks such as cryptography, qubits and superposition would, in principle, allow a quantum computer to explore a huge number of potential solutions at once, making such systems much faster than conventional computers for those problems. Microsoft, Google, IBM, and other firms are all throwing tons of resources into quantum-computing research, hoping for a breakthrough that will make them a leader in this nascent industry.

Questions abound about quantum computing, including whether these systems will actually produce the answers that companies really need. For those in the tech industry, there's a related interest in whether quantum computing will actually produce jobs at scale.

"The large tech companies and research laboratories who are leading the charge on R&D in the pure quantum computing hardware space are looking for people with advanced degrees in key STEM fields like physics, math and engineering," said John Prisco, President & CEO of Quantum Xchange, which markets a quantum-safe key distribution that supposedly will bridge the gap between traditional encryption solutions and quantum computing-driven security. "This is in large part because there are few programs today that actually offer degrees or specializations in quantum technology."

When Prisco was in graduate school, he added, "There were four of us in the electrical engineering program with the kind of physics training this field calls for. More recently, I've seen universities like MIT and Columbia investing in offering this training to current students, but it's going to take a while to produce experts."

There's every chance that increased demand for quantum-skilled technologists could drive even more universities to spin up the right kind of training and education programs. The National Institute of Standards and Technology (NIST) is evaluating post-quantum cryptography that would replace existing methods, including public-key RSA encryption methods. Time is of the essence when it comes to governments and companies coming up with these post-quantum algorithms; the next evolutions in cryptography will render the current generation pretty much obsolete.
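To see why quantum progress puts RSA specifically on the clock, consider this deliberately tiny toy example (my own illustration, with insecure key sizes): everything about the private key follows from the factors of the public modulus n, which is exactly what a large quantum computer running Shor's algorithm would recover.

```python
# Toy RSA with absurdly small primes -- illustration only, not real cryptography.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

message = 65
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message   # decryption works

# An attacker who can factor n (trivial here, Shor's algorithm at real key sizes)
# recovers the private key directly:
p_found = next(k for k in range(2, n) if n % k == 0)
q_found = n // p_found
d_found = pow(e, -1, (p_found - 1) * (q_found - 1))
assert d_found == d
print("private exponent recovered from the factors:", d_found)
```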

Combine that quest with the current shortage of trained cybersecurity professionals, and you start to see where the talent and education crunch will hit over the next several years. "While hackers weaponizing quantum computers themselves is still a far-off proposal, the threat of harvesting attacks, where nefarious actors steal encrypted data now to decrypt later once quantum computers are available, is already here," Prisco said, pointing at China's 2015 hack of the U.S. Office of Personnel Management, which saw the theft of 21 million government employee records.

"Though that stolen data was encrypted and there is no evidence it has been misused to date, the Chinese government is likely sitting on that trove, waiting for the day they have a quantum computer powerful enough to crack public key encryption," he said. "Organizations that store sensitive data with a long shelf-life need to start preparing now. There is no time to waste."

But what will make a good quantum technologist?


Herman Collins, CEO of StrategicQC, a recruiting agency for the quantum-computing ecosystem, believes that sourcing quantum-related talent at this stage comes down to credentials. "Because advanced quantum expertise is rare, the biggest sign that a candidate is qualified is whether they have a degree in one of the fields of study that relates to quantum computing," he said. "I would say that degrees, particularly advanced degrees, such as quantum physics obviously, physics theory, math or computer science are a good start. A focus on machine learning or artificial intelligence would be excellent as part of an augmented dynamic quantum skill set."

Although Google, IBM, and the U.S. government have infinite amounts of money to throw at talent, smaller companies are occasionally posting jobs for quantum-computing talent. Collins thinks that, despite the relative lack of resources, these small companies have at least a few advantages when it comes to attracting the right kind of very highly specialized talent.

"Smaller firms and startups can often speak about the ability to do interesting work that will impact generations to come and perhaps some equity participation," he said. "Likewise, some applicants may be interested in working with smaller firms to build quantum-related technology from the ground up. Others might prefer a more close-knit team environment that smaller firms may offer."

Some 20 percent of the quantum-related positions, Collins continued, are in marketing, sales, management, tech support, and operations. Even if you haven't spent years studying quantum computing, in other words, you can still potentially land a job at a quantum-computing firm, doing all the things necessary to ensure that the overall tech stack keeps operating.

"It is equally important for companies in industries where quantum can have impactful results in the nearer term to begin to recruit and staff quantum expertise now," Collins said. "Companies competing in financial services, aerospace, defense, healthcare, telecommunications, energy, transportation, agriculture and others should recognize the vital importance of looking very closely at quantum and adding some skilled in-house capability."

Given the amount of money and research-hours already invested in quantum computing, as well as some recent (and somewhat controversial) breakthroughs, there's every chance the tech industry could see an uptick in demand for jobs related to quantum computing. Even for those who don't plan on specializing in this esoteric field, there may be opportunities to contribute.

Here is the original post:

Quantum Computing: Will It Actually Produce Jobs? - Dice Insights


Quantum computing is right around the corner, but cooling is a problem. What are the options? – Diginomica

Posted: at 1:52 pm




Why would you be thinking about quantum computing? Yes, it may be two years or more before quantum computing will be widely available, but there are already quite a few organizations that are pressing ahead. I'll get into those use cases, but first, let's start with the basics:

Classical computers require built-in fans and other ways to dissipate heat, and quantum computers are no different. Instead of working with bits of information that can be either 0 or 1, as in a classical machine, a quantum computer relies on "qubits," which can be in both states simultaneously, called a superposition, thanks to the quirks of quantum mechanics. Those qubits must be shielded from all external noise, since the slightest interference will destroy the superposition, resulting in calculation errors. Well-isolated qubits heat up quickly, so keeping them cool is a challenge.

The current operating temperature of quantum computers is 0.015 Kelvin or -273C or -460F. That is the only way to slow down the movement of atoms, so a "qubit" can hold a value.

There have been some creative solutions proposed for this problem, such as the "nanofridge," which builds a circuit with an energy gap dividing two channels: a superconducting fast lane, where electrons can zip along with zero resistance, and a slow resistive (non-superconducting) lane. Only electrons with sufficient energy to jump across that gap can get to the superconductor highway; the rest are stuck in the slow lane. This has a cooling effect.

Just one problem though: The inventor, Mikko Möttönen, is confident enough in the eventual success that he has applied for a patent for the device. However, "Maybe in 10 to 15 years, this might be commercially useful," he said. "It's going to take some time, but I'm pretty sure we'll get there."

Ten to fifteen years? It may be two years or more before quantum computing will be widely available, but there are already quite a few organizations that are pressing ahead in the following sectors:

An excellent, detailed report on the quantum computing ecosystem is "The Next Decade in Quantum Computing and How to Play."

But the cooling problem must get sorted. It may be diamonds that finally solve some of the commercial and operational/cost issues in quantum computing: synthetic diamonds, also known as lab-grown diamonds.

The first synthetic diamond was grown by GE in 1954. It was an ugly little brown thing. By the '70s, GE and others were growing up to 1-carat off-color diamonds for industrial use. By the '90s, a company called Gemesis (renamed Pure Grown Diamonds) successfully created one-carat flawless diamonds graded IIa, meaning perfect. Today designer diamonds come in all sizes and colors: adding boron to make them blue or nitrogen to make them yellow.

Diamonds have unique properties. They have high thermal conductivity (so they shed heat rather than melting the way silicon can); the thermal conductivity of a pure diamond is the highest of any known solid. They are also excellent electrical insulators. A diamond can host an impurity called a nitrogen-vacancy (NV) center, where a carbon atom is replaced by a nitrogen atom adjacent to a vacant lattice site, leaving an unpaired electron that can be excited or polarized by a laser. When the excitation decays, the electron gives off a single photon and drops to a lower energy state. Somehow, and I admit I don't completely understand this, the particle is placed into a quantum superposition. In quantum-speak, that means it can be two things, two values, two places at once, with both spin up and spin down. That is the essence of quantum computing: the creation of a "qubit," something that can be both 0 and 1 at the same time.
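For readers who prefer to see the math, a superposition can be written as a two-component complex vector. The sketch below, plain NumPy and not tied to any particular hardware, starts a qubit in the 0 state and applies a Hadamard gate to put it into an equal superposition of 0 and 1:

```python
import numpy as np

# Basis states |0> and |1> as vectors
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Hadamard gate: maps |0> to an equal superposition of |0> and |1>
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

psi = H @ ket0
print(psi)               # approx [0.707+0j, 0.707+0j]
print(np.abs(psi) ** 2)  # [0.5, 0.5] -> 50/50 chance of measuring 0 or 1
```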

If that isn't weird enough, there is the issue of entanglement. A microwave pulse can be directed at a pair of qubits to prepare them in a joint state, and that state can be "entangled" so that their measurement outcomes are always correlated. In other words, measuring one of them immediately tells you what you would find on the other, even if great distances separate them, a phenomenon Einstein dubbed "spooky action at a distance." Entangled photons don't need bulky equipment to keep them in their quantum state, and they can carry quantum information across long distances.
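A minimal way to see those correlations is to build the simplest entangled state, a Bell pair, in a small state-vector simulation. This sketch (again plain NumPy, purely illustrative) applies a Hadamard and a CNOT to two qubits and then samples measurement outcomes, which always agree:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-qubit basis order: |00>, |01>, |10>, |11>
state = np.zeros(4, dtype=complex)
state[0] = 1.0                                   # start in |00>

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
I2 = np.eye(2, dtype=complex)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = CNOT @ (np.kron(H, I2) @ state)          # Bell state (|00> + |11>)/sqrt(2)

probs = np.abs(state) ** 2
samples = rng.choice(["00", "01", "10", "11"], size=5, p=probs)
print(samples)   # only "00" and "11" appear: the two qubits always agree
```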

In theory, every qubit added to an entangled register doubles the size of the state space a quantum computer can work with, which is what makes its computing power grow so explosively. In telecommunications, for example, entangled photons that span the traditional telecom spectrum have enormous potential for multi-channel quantum communication.

News flash: physicists have recently demonstrated three-particle entanglement, which increases the capacity of quantum computing geometrically.

The cooling of qubits is the stumbling block. Diamonds seem to offer a solution, one that could bring quantum computing into the mainstream. The impurities in synthetic diamonds can be manipulated so that the state of a qubit can be held at room temperature, unlike in other potential quantum computing systems, and NV-center qubits (described above) are long-lived. There are still many issues to unravel before quantum computers are broadly feasible, but for now, unless you have a refrigerator at home that operates near absolute zero, hang on to that laptop.

But don't diamonds in computers sound expensive, flagrant, excessive? It raises the question: what is anything worth? Synthetic diamonds for jewelry are not as expensive as mined gems, but the price one pays at retail is burdened by the effects of monopoly and of the many intermediaries: distributors, jewelry companies, and retailers.

A recent book explored the value of fine things and explained how perceived value often has only a psychological basis. In the 1930s, De Beers, which had a monopoly on the world diamond market and far more stones than the weak demand could absorb, engaged the N. W. Ayer advertising agency after realizing that diamonds were being sold only to the very rich, while everyone else was buying cars and appliances. Together they created a market for diamond engagement rings and introduced the idea that a man should spend at least three months' salary on a diamond for his betrothed.

And in a classic case of selling an idea rather than a brand, they used earworm taglines like "a diamond is forever." Those four iconic words have appeared in every De Beers advertisement since 1948, and AdAge named the line the no. 1 slogan of the century in 1999. Incidentally, diamonds aren't forever: that diamond on your finger is slowly evaporating.

Worldwide outrage over the blood-diamond scandal is increasing both the supply of and the demand for synthetic diamonds in fine jewelry. If quantum computers take off and a diamond-based architecture becomes a standard, it will spawn a synthetic-diamond production boom, increasing supply and drastically lowering costs, which would make the approach feasible.

Many thanks to my daughter, Aja Raden, an author, jeweler, and behavioral economist, for her insights into the diamond trade.

Here is the original post:

Quantum computing is right around the corner, but cooling is a problem. What are the options? - Diginomica

Written by admin

March 19th, 2020 at 1:52 pm

Posted in Quantum Computer

Quantum Computing for Everyone – The Startup – Medium

Posted: at 1:52 pm


without comments

Quantum computers can dramatically outperform classical ones on certain problems, such as unstructured database search (a quadratic speedup) and factoring (a superpolynomial speedup which, as we will discuss soon, may one day break your Internet encryption).

An important thing to realize is that a register of qubits can represent far more information than the same number of bits. When measured, one qubit yields exactly one value, just as one bit holds one value. However, describing the state of two qubits in superposition requires tracking amplitudes for four basis states, which a classical computer would need at least four values to record. Likewise, a three-qubit system spans eight basis states: 000, 001, 010, 011, 100, 101, 110, and 111. This pattern continues, doubling with every added qubit.
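To make that doubling concrete, here is a small, purely illustrative Python sketch that prints how many classical values are needed to describe an n-qubit state:

```python
# Number of amplitudes needed to describe an n-qubit state.
# Each added qubit doubles the size of the state vector.
for n in range(1, 11):
    print(f"{n:2d} qubits -> {2 ** n:5d} basis-state amplitudes")
```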

Plotted as a graph, the comparison looks like this: the x-axis represents the number of qubits used to hold a certain amount of information; the blue line's y-value is the number of bits needed to hold the same amount of information as x qubits, i.e. 2 to the power of x; and the red line's y-value is simply the number of qubits itself (y = x).

Imagine the exponential compression quantum computing can provide! A state space with as many basis states as a gigabyte has bits (8E+09) requires only log(8E+09)/log(2) ≈ 32.9, rounded up to 33, qubits.
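A quick check of that arithmetic in Python:

```python
import math

bits_in_gigabyte = 8e9                  # 8 x 10^9 bits
qubits = math.log2(bits_in_gigabyte)    # about 32.9
print(qubits, math.ceil(qubits))        # 32.897..., 33
```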

Quantum computers are also great at factoring numbers, which leads us to RSA encryption. The security protocol behind Medium and probably any other website you've been on is known as RSA encryption. It relies on the fact that, with current computing resources, it would take a very, very long time to factor a sufficiently large number m (in practice, hundreds of digits) whose only factorization is p times q, where both p and q are large prime numbers. However, dividing m by p or q is computationally easy, and since m divided by q returns p and vice versa, this asymmetry provides a quick key-verification system.
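The asymmetry is easy to demonstrate with a toy example; the primes below are tiny and chosen purely for illustration, whereas real RSA keys use primes hundreds of digits long.

```python
# Toy illustration of the RSA asymmetry: multiplying is instant,
# recovering the factors by brute force is the hard direction.
p, q = 999_983, 1_000_003            # small primes, for illustration only
m = p * q                            # easy: one multiplication

def factor_by_trial_division(n: int) -> tuple[int, int]:
    """Brute-force search for a factor; hopeless once n is hundreds of digits."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("n is prime")

print(m)
print(factor_by_trial_division(m))   # (999983, 1000003), after ~a million trial divisions
```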

A quantum algorithm called Shor's algorithm offers a superpolynomial speedup for factoring, which could one day break RSA encryption. But don't buy into the hype yet: as of this writing, the largest number factored with Shor's algorithm on real quantum hardware is 21 (into 3 and 7). The hardware does not yet exist for quantum computers to factor 30-digit numbers, or even 10-digit ones. And even if quantum computers one day do break RSA, quantum key distribution protocols such as BB84, which themselves rely on quantum properties, are designed to remain secure against quantum computers.
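For the curious, the number-theoretic core of Shor's algorithm can be sketched classically. The quantum computer's job is to find the period r of a^x mod N efficiently; everything after that is ordinary arithmetic. The toy below simply brute-forces the period for N = 21 (the step that only a quantum computer can do quickly for large N):

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r % n == 1 (brute force stand-in for the quantum step)."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

N, a = 21, 2          # a must share no factor with N
r = order(a, N)       # r = 6 for a = 2, N = 21
assert r % 2 == 0
f1 = gcd(a ** (r // 2) - 1, N)
f2 = gcd(a ** (r // 2) + 1, N)
print(r, f1, f2)      # 6 7 3  ->  21 = 3 x 7
```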

So will quantum computers ever completely replace the classical PC? Not in the foreseeable future.

Quantum computing, while developing very rapidly, is still in its infancy, with research conducted semi-competitively by large corporations such as Google, Microsoft, and IBM. Much of the hardware needed to scale quantum computing does not yet exist. There are several obstacles to a quantum future; a major one is correcting gate errors and maintaining the integrity of a qubit's state.

However, given the pace of innovation over the past few years, it seems inevitable that quantum computing will make huge strides in our lifetimes. At the same time, complexity theory shows there are many problems where classical computers perform as well as or better than quantum computers. IBM's quantum developers say quantum computing will probably never completely replace classical computing. Instead, we may see hybrid chips that rely on quantum processors for certain tasks and classical processors for others, depending on which is more appropriate.

Read more:

Quantum Computing for Everyone - The Startup - Medium

Written by admin

March 19th, 2020 at 1:52 pm

Posted in Quantum Computer

Work from home: Improve your security with MFA – We Live Security

Posted: at 1:52 pm


without comments

Remote work can be much safer with the right cyber-hygiene practices in place; multi-factor authentication is one of them.

If you happen to be working from home due to the COVID-19 pandemic, you should beef up your logins with multi-factor authentication (MFA), sometimes called two-factor authentication (2FA). That way, you don't have to entrust your security to a password alone. Easy to hack, steal, and leak, rinse and repeat, passwords have become passé in the security world; it's time to dial in your MFA.

That means you have something besides just a password. You may have seen MFA in action when you log into your bank and receive an access code on your smartphone that you must also enter to verify it's really you. While it's an extra step, it makes it vastly more difficult for bad guys to get into your account, even if they have a password that was compromised in a breach or otherwise.

The good news is that MFA is no longer super-tough to use. Here, we look at a few different popular ways to use it. If you need to work remotely now and log into a central office to collaborate with co-workers, this is a nice way to beef up the security of those connections.

This means you have something like a key fob, security USB key or the like, which can be used to generate a very secure passcode that's all but impossible to break (unless you have a quantum computer handy). Nowadays, things like YubiKey or Thetis are available for less than US$50 and are very widely supported if you're logging into your own corporate office technology, online office applications and a host of other cloud applications. It means your normal login will ask for a password, but also the code generated by your device, which is often physically small enough to get lost in a pants pocket, so some folks hang them on their keychain for safekeeping.

Nowadays you probably carry a mobile device around most of the time, which is a good argument for using it to boost your MFA security stance. For example, you can download an authentication app such as Authy, Google Authenticator, or ESET Secure Authentication. Whatever you choose, make sure it has a solid history, security-wise, since it needs to reside on your smartphone, which we now know can become compromised as well, thereby undermining your other security efforts.
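Under the hood, most of these apps implement the TOTP standard (RFC 6238): the app and the server share a secret, and each independently derives a short code from that secret and the current time. Below is a minimal sketch of that derivation, for illustration only; real apps and servers handle secret storage and clock drift far more carefully, and the example secret is hypothetical.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Hypothetical shared secret; real secrets come from the provider's setup QR code.
print(totp("JBSWY3DPEHPK3PXP"))
```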

RELATED READING: Work from home: How to set up a VPN

It's worth noting that spam SMS messages on your smartphone can trick some users into voluntarily compromising their own accounts, so stay on the lookout if you use this method. Of course, reputable mobile security software can help if you're concerned about security problems on the platform itself.

It's very hard to fake a fingerprint or retinal scan, which makes biometrics a solid factor in MFA. Nowadays, lots of devices have built-in biometric readers that can image your face or scan your fingerprint, so it's not hard to implement this on a device you probably already own. Some folks steer away due to privacy concerns, which promises to be an ongoing conversation. Also, while you can reset a password if a provider gets hacked, it is notoriously difficult to reset your face (old spy-movie plots, anyone?).

The important thing with MFA is to pick a method that suits your goals and is easy to include in your routine. I have a very good lock on my front door, but it's very hard to use, so my wife often catches me leaving it open, which isn't very secure, is it? Good security you don't use can't protect you.

In the event of a breach, MFA offers side benefits as well. If you are notified that your password has been compromised, there's a very good chance the attackers don't also have one of your other factors, so successful attacks should drop precipitously when MFA is correctly implemented. Use an MFA solution and enjoy technology more safely.

Read the rest here:

Work from home: Improve your security with MFA - We Live Security

Written by admin

March 19th, 2020 at 1:52 pm

Posted in Quantum Computer

Career navigation Be at the core or be at the edge – The Financial Express BD

Posted: at 1:52 pm


without comments

Radi Shafiq | Published: March 19, 2020 11:02:35

In 2009, electrical engineering seemed the best subject for aspiring engineering students to study. By the end of 2014, it seemed to be computer science; now it seems to be data science and statistics. There is no way of telling what will matter in five years. Maybe it is quantum computing, or maybe a new era emphasising mental well-being; maybe biochemistry or philosophy suddenly takes centre stage in every endeavour.

Today, the market is shifting at an ever-increasing pace. It is easy to feel lost while navigating a career, looking for the best path up the ladder. Young professionals are essentially trying to stay relevant, and even vital, 20-30 years from now. However, most of today's buzz-worthy careers were not even around 10 years ago, so how can one prepare for something 20 years down the line?

Here the author has found one framework of thinking very helpful. It can be called the "be at the core or be at the edge" framework for thinking about jobs. Every company has some core functions that are time-tested and relatively stable - for some it is manufacturing, for some it is sales, for others it is field management. These functions have well-defined roles, hierarchy, and history to go with them. Someone who is good at this core work has a more secure job, with little chance of unpredictable trouble. A clear hierarchy also means a defined career progression, although at a predictable pace, with new openings created only when seniors move out or up or the company grows.

On the other hand, there are the functions at the edge of the company. These are new things: maybe a new data section, a digital marketing wing, or a small research team that is yet to make an impact on the work. The people at the edge often keep a low profile while staying flexible enough to take initiatives in new and creative directions. They introduce new programmes and explore sudden new flows of value or revenue. They can often be deemed unnecessary by the core people in the organisation.

However, since the market landscape is changing faster than ever, the people at the edge have the best chance of adapting to a new reality and introducing the functions that take the company to the next level. This can suddenly turn the edge people into the core people - or at least into a vital support function the core needs to survive and thrive. Think of the way Adobe stopped regular software sales in favour of subscription services, how newspapers increasingly emphasise the web version over print, or how TV shows now work overtime on YouTube clips.

People who are too entrenched in their core function and their way of doing things can become stiff and slow to look into new avenues, as looking at anything outside can understandably feel like a waste of time. Why would anyone stop doing what makes the most money and instead dabble in something with no proven market? But this thinking cuts them off from dynamic learning. Then one company brings about a sudden change; in the aftermath, the whole market begins to adapt, and the old core people's position in the market hierarchy quickly shifts. Suddenly the market demands new tricks from those who had felt secure for years.

Very often, though, there is no harm in digging deep into the core of the company. It can be a safe bet, as most businesses do not change so dramatically.

But to reduce the risk of suddenly becoming irrelevant in the market, everyone should invest a portion of their time in projects at the edge of their organisation, or at the edge of their skill set, throughout their career. This flexibility keeps them in touch with the changing tides and ensures they can ride the wave, or at least not be taken by surprise when the change finally comes.

This thinking works at any stage of life. When the author was a student, he did digital art just for fun, but it ultimately helped him land his first three part-time jobs; having those skills was a bonus on top of his studies. He had friends whose side interest in videography while studying computer science ended up shaping their whole careers. In the author's office, he has seen a colleague's occasional contributions to a new initiative become 50 per cent of her duties within a year, leading to a promotion and recognition.

So think again: at the office, are you at the core or at the edge? Why not both? Keep learning. Keep creating.

Radi Shafiq is a development professional and artist. He can be reached at radi.iba@gmail.com

Continued here:

Career navigation Be at the core or be at the edge - The Financial Express BD

Written by admin

March 19th, 2020 at 1:52 pm

Posted in Quantum Computer

