
10 high iron vegetables for vegetarians and vegans – Medical News Today

Posted: May 9, 2021 at 1:50 am


Meat and other animal products are rich sources of iron, which sparks concerns about iron deficiency in people following vegetarian and vegan diets. However, there are several suitable sources of iron for these individuals.

Heme iron, which is more abundant in animal products, is easier for the body to absorb. Even so, people who follow plant-based diets are no more likely than others to experience iron deficiency, provided they eat a wide variety of foods; those who do not plan their diets carefully, however, can fall short of their iron needs.

Keep reading to learn more about 10 vegetables that vegetarians and vegans can eat to meet their iron needs, as well as more information on why iron is important.

A person's daily iron needs vary with age, health, and whether they are pregnant or lactating. Adult males aged 19–50 years need 8 milligrams (mg) a day, while females need 18 mg. After the age of 50, most adults require 8 mg of iron per day. During pregnancy, a person's iron needs increase to 27 mg daily.

Some vegetables that are high in iron include the below.

This fungal delicacy can be expensive as a main course but can serve as a garnish for salads, sandwiches, and other meals for a more affordable price. It offers 6.94 mg of iron per 200 gram (g) serving.

This thin, green root vegetable is one of the most suitable vegetarian sources of protein. Some people also call it the black oyster plant, serpent root, viper's herb, or viper's grass. Individuals can steam 250 g of black salsify to receive 5.5 mg of iron.

Richer in vital nutrients than more water-dense lettuces, such as romaine, spinach is a suitable choice for salads. It offers 4 mg of iron per 150 g serving. Try mixing it with other leafy greens to boost the iron content of a salad even higher.

This bright, rainbow-hued vegetable is well suited to salads. Try mixing it with spinach for a lunch rich in iron, or steam and season it on its own for a quick snack. Cooked Swiss chard offers 3.4 mg of iron per 150 g serving.

A person can eat beet greens as a snack or use them to replace other lettuces in a salad. A 100 g serving offers 1.9 mg of iron.

Add canned tomatoes to a salad for some acidic flavor and an iron boost, or try them on a sandwich. They contain 1.57 mg of iron per serving of half a cup.

People can include this uniquely shaped lettuce in salads. Some also like to steam it and eat it on its own. It contains 2 mg of iron per 100 g serving.

Most people serve green cabbage as a side dish. Try it in a casserole for some extra crunch and added iron; it contains 0.94 mg of iron per 200 g serving.

Many people eat Brussels sprouts salted, while others enjoy them cooked with garlic in an air fryer or shredded and raw as part of a salad. After steaming, they offer 2.13 mg of iron per 150 g.

Boiled green peas contain 2.46 mg of iron per cup. They make a suitable snack and also pair well with other vegetables. Peas can also add extra texture to an iron-rich salad with Swiss chard and spinach.

Iron is vital for health because the body needs it to produce hemoglobin, a protein that helps red blood cells transport oxygen. Some of its roles include:

A person who does not get enough iron may develop iron deficiency. With this condition, an individual may not have any initial symptoms, but as it progresses, they may develop iron deficiency anemia, which can involve the following symptoms:

In severe cases, iron deficiency can become life threatening. People deficient in iron typically have longer hospital stays, worse outcomes when they get sick, a higher risk of heart health issues, and a higher overall risk of dying.

Pregnant people with iron deficiency have a higher risk of negative outcomes such as preterm labor or having a baby with low birth weight. In children, iron deficiency can lead to neurological problems and developmental delays.

While diet plays a role in iron deficiency, it is not the only factor. A person's risk of the condition depends on their age, health, and other factors. Bleeding is also a major risk factor, such as gastrointestinal bleeding from an ulcer or another digestive issue. Menstruation in females of reproductive age can also contribute to iron deficiency.

This is why it is crucial for doctors to assess potential causes of iron deficiency and not just treat the symptoms. Sometimes, iron deficiency is the first symptom of serious bleeding or ulcers. The condition can also appear in people with certain rare genetic disorders, end stage kidney failure, or congestive heart failure.

Individuals may also have a higher risk of iron deficiency after:

Lead exposure in children can also lead to iron deficiency. Parents whose children are iron deficient should discuss lead exposure testing with a doctor.

Learn more about the health benefits and recommended daily intakes of iron here.

Spinach, Swiss chard, and lamb's lettuce are some vegetables that contain high amounts of iron.

Iron deficiency is common, with females of childbearing age having the highest prevalence, followed by young children aged 12–36 months, around 9% of whom are affected.

People with the condition may need to take supplements to restore their iron to optimal levels. In severe cases, they might need an iron infusion or a blood transfusion.

If a person does not get enough iron, believes they may be iron deficient, or has a history of iron deficiency, they should talk to a doctor about strategies for addressing the problem.

Read the original here:

10 high iron vegetables for vegetarians and vegans - Medical News Today

Written by admin |

May 9th, 2021 at 1:50 am

Posted in Vegan

My vegan girlfriend is forcing me to choose between her and my cat because eating mice is against her… – The Sun

Posted: at 1:50 am


A MAN is being forced to choose between his girlfriend and his cat because she isn't happy about the feline eating mice.

The 22-year-old was offered the ultimatum by his girlfriend of seven months after she claimed the kitty didn't share her vegan values.


According to the perplexed chap, his girlfriend is "amazing" and the pair are "super compatible in a lot of ways."

But after several months of dating, she told him that there was no future if he planned on keeping his cat, Mittens, because it "violates the principles of veganism".

On a Reddit thread, the man wrote: "She is an outspoken vegan, and she made it clear at the start of our relationship that it was important to her that any potential (boyfriend) had similar cruelty-free values.

"Me, already being a pescatarian, had little difficulty transitioning to a fully plant based diet.

"My girlfriend was proud of me for going cruelty free and everything seemed well."

He said that the pair became the "vegan couple" on their college campus.

He explained how his girlfriend wasn't warm towards Mittens, but put it down to not growing up with cats.

Throughout the pandemic, the couple decided to spend quarantine together and started talking about buying an apartment together.

But that was when she "dropped the bombshell" and said that she didn't see a future with him if Mittens was involved.

"She said that she believed owning a cat is unconscionable for vegans because they hunt mice and eat meat, and the very act of owning a pet is a violation of vegan principles."

Stunned by her demands, the man told his girlfriend he was not willing to ditch his furry pal as the animal had no choice but to eat meat.

He told her that he buys reputable brands of cat food to even it out. But this still didn't appease his girlfriend, who he claims, advocates for the extinction of domestic cats.

Concerned he was potentially being unreasonable and not wanting the relationship to end over a difference in beliefs, he asked his vegan pals their thoughts.

They reassured him and told him that it maybe isn't about Mittens but that she just wants out of the relationship.


Fellow Reddit users were horrified by the ultimatum, one person wrote: "She suggested to give it away? It makes no sense whatsoever, will it consume less meat with a new owner?

"Of course not. It's like boasting about your lack of garbage because you dump it in your neighbour's yard," she added.

Another user wrote: "As a fellow vegan, no, this isn't cool. If she truly cared about animals she wouldn't ask you to rehome your cat!"

See the article here:

My vegan girlfriend is forcing me to choose between her and my cat because eating mice is against her... - The Sun

Written by admin |

May 9th, 2021 at 1:50 am

Posted in Vegan

The Alpha of ‘Go’. What is AlphaGo? | by Christopher Golizio | Apr, 2021 | Medium – Medium

Posted: April 24, 2021 at 1:58 am


Photo by Maximalfocus on Unsplash

The game of Go is believed to be the oldest continuously played board game in the world. It was invented in China over 2,500 years ago and is still wildly popular to this day. A survey taken in 2016 found there to be around 20 million Go players worldwide, though they mostly live in Asia. The game board uses a grid system and requires two players. Each player places either black or white stones on the board, one after another. If player A surrounds player B's stone(s), then any surrounded stone is removed from the board and later factors into player A's score. If a player connects a series of stones, the number of squares within that connected series will count toward at least a portion of that player's final score. The player with the most points wins.

Of course, that explanation of the game was over-simplified, but the game itself appears simple as well, at least in terms of its rules and goal. However, the sheer number of legal board positions, roughly 2.1 × 10^170, dwarfs the estimated 10^80 atoms in the observable universe. The incomprehensible number of possible moves alone adds an extreme level of complexity to the game. On top of that, it was long believed that Go required a certain level of human intuition to excel at; however, the reigning world champion of Go calls that particular assessment into question.

It was believed not too long ago that a computer would never be able to beat a high-ranking human Go player. It's happened in other, similar-style games, namely chess. In 1997, a computer developed by IBM named Deep Blue beat Garry Kasparov, the reigning world chess champion, using standard regulated time. Deep Blue used a brute-force approach. This involves searching every possible move of each piece (on both sides of the board) before ultimately choosing the move that would give it the highest probability of winning. This was more a big win for hardware; AlphaGo is something completely different.

AlphaGo, developed by the artificial intelligence research company DeepMind, is the result of combining machine learning and tree search techniques, specifically the Monte Carlo tree search, along with extensive training. Its training consisted of playing games, against both human and computer opponents. Decision making is executed via a deep neural network, which implements both a value network and a policy network. These two networks guide which tree branches should be traversed and which should be ignored due to a low probability of winning. This greatly decreases the time complexity of AlphaGo's search, while allowing the system to keep improving over time.
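
For readers curious how a policy prior and a value estimate can steer a tree search, here is a minimal Python sketch of the PUCT-style selection rule described in DeepMind's published AlphaGo Zero work. The `Node` class and the `c_puct` constant are illustrative placeholders, not DeepMind's actual code.

```python
import math

class Node:
    """One board position in the search tree."""
    def __init__(self, prior):
        self.prior = prior        # P(s, a): policy network's probability for the move leading here
        self.visit_count = 0      # N(s, a): how many simulations have passed through this node
        self.value_sum = 0.0      # running total of value-network estimates
        self.children = {}        # move -> Node

    def q_value(self):
        # Mean value of simulations through this node (0 if unvisited)
        return self.value_sum / self.visit_count if self.visit_count else 0.0

def select_child(node, c_puct=1.5):
    """Pick the child maximizing Q + U: the value estimate plus an exploration
    bonus that is large for moves the policy network likes but the search has
    rarely tried."""
    total_visits = sum(child.visit_count for child in node.children.values())

    def score(child):
        u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visit_count)
        return child.q_value() + u

    move = max(node.children, key=lambda m: score(node.children[m]))
    return move, node.children[move]
```

Repeating this selection from the root, expanding leaves, and backing up value-network estimates is what prunes the otherwise astronomical search space.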

After being acquired by Google, and after a couple of new versions, AlphaGo was eventually succeeded by AlphaGo Zero. AlphaGo Zero differed from its predecessor by being completely self-taught. All prior versions of AlphaGo were trained in part by showing them human-played games. AlphaGo Zero, however, did not use any dataset fed to it by humans. Even though it pursued its goal blindly, AlphaGo Zero was able to learn and improve until it surpassed all versions of the original AlphaGo in a mere 40 days. Eventually, AlphaGo Zero was generalized into AlphaZero.

AlphaGo, and the familial programs that succeeded it, were a major breakthrough in the world of AI. Driven by the hard work of many developers and by the system's capacity for self-improvement, this is far from the ceiling of AlphaGo's full potential. AlphaGo Zero and AlphaZero push this further; because they require no human-supplied training data, they raise the prospect of a completely generalized AI algorithm that could be applied to many different and diverse situations and, over time, function at a level that easily outperforms humans.

Two more fun facts: Along with Go and chess, MuZero, the successor to AlphaZero, is also capable of playing at least 57 different Atari games at a superhuman level. Additionally, the hardware cost of a single unit used for the AlphaGo Zero system was quoted at $25 Million.

Continue reading here:

The Alpha of 'Go'. What is AlphaGo? | by Christopher Golizio | Apr, 2021 | Medium - Medium

Written by admin |

April 24th, 2021 at 1:58 am

Posted in Alphago

Why AI That Teaches Itself to Achieve a Goal Is the Next Big Thing – Harvard Business Review

Posted: at 1:58 am


What's the difference between the creative power of game-playing AIs and the predictive AIs most companies seem to use? How they learn. The AIs that thrive at games like Go, creating never-before-seen strategies, use an approach called reinforcement learning, a mature machine learning technology that's good at optimizing tasks in which an agent takes a series of actions over time, where each action is informed by the outcome of the previous ones, and where you can't find a right answer the way you can with a prediction. It's a powerful technology, but most companies don't know how or when to apply it. The authors argue that reinforcement learning algorithms are good at automating and optimizing in dynamic situations with nuances that would be too hard to describe with formulas and rules.


Lee Sedol, a world-class Go champion, was flummoxed by the 37th move DeepMind's AlphaGo made in the second match of the famous 2016 series. So flummoxed that it took him nearly 15 minutes to formulate a response. The move was strange to other experienced Go players as well, with one commentator suggesting it was a mistake. In fact, it was a canonical example of an artificial intelligence algorithm learning something that seemed to go beyond just pattern recognition in data: learning something strategic and even creative. Indeed, beyond just feeding the algorithm past examples of Go champions playing games, DeepMind developers trained AlphaGo by having it play many millions of matches against itself. During these matches, the system had the chance to explore new moves and strategies, and then evaluate whether they improved performance. Through all this trial and error, it discovered a way to play the game that surprised even the best players in the world.

If this kind of AI with creative capabilities seems different than the chatbots and predictive models most businesses end up with when they apply machine learning, that's because it is. Instead of machine learning that uses historical data to generate predictions, game-playing systems like AlphaGo use reinforcement learning, a mature machine learning technology that's good at optimizing tasks. To do so, an agent takes a series of actions over time, and each action is informed by the outcome of the previous ones. Put simply, it works by trying different approaches and latching onto, and reinforcing, the ones that seem to work better than the others. With enough trials, you can reinforce your way to beating your current best approach and discover a new best way to accomplish your task.
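
To make the trial-and-error idea concrete, here is a toy Python sketch of an agent that tries three hypothetical actions and gradually reinforces the one that pays off best. The payoff numbers and the epsilon-greedy rule are illustrative assumptions, not anything from the article.

```python
import random

# Toy "bandit" loop: try different approaches, reinforce the ones that work better.
true_payoffs = {"A": 0.3, "B": 0.5, "C": 0.8}   # hidden from the agent; C is actually best
estimates = {a: 0.0 for a in true_payoffs}       # the agent's running value estimates
counts = {a: 0 for a in true_payoffs}
epsilon = 0.1                                    # fraction of the time spent exploring at random

for trial in range(10_000):
    if random.random() < epsilon:
        action = random.choice(list(true_payoffs))   # explore a random action
    else:
        action = max(estimates, key=estimates.get)   # exploit the current best estimate
    reward = 1.0 if random.random() < true_payoffs[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)  # the estimates converge toward the true payoffs, and "C" dominates the choices
```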

Despite its demonstrated usefulness, however, reinforcement learning is mostly used in academia and niche areas like video games and robotics. Companies such as Netflix, Spotify, and Google have started using it, but most businesses lag behind. Yet opportunities are everywhere. In fact, any time you have to make decisions in sequence (what AI practitioners call sequential decision tasks), there's a chance to deploy reinforcement learning.

Consider the many real-world problems that require deciding how to act over time, where there is something to maximize (or minimize), and where you're never explicitly given the correct solution. For example:

If you're a company leader, there are likely many processes you'd like to automate or optimize, but that are too dynamic, or have too many exceptions and edge cases, to program into software. Through trial and error, reinforcement learning algorithms can learn to solve even the most dynamic optimization problems, opening up new avenues for automation and personalization in quickly changing environments.

Many businesses think of machine learning systems as prediction machines and apply algorithms to forecast things like cash flow or customer attrition based on data such as transaction patterns or website analytics behavior. These systems tend to use what's called supervised machine learning. With supervised learning, you typically make a prediction: the stock will likely go up by four points in the next six hours. Then, after you make that prediction, you're given the actual answer: the stock actually went up by three points. The system learns by updating its mapping between input data (like past prices of the same stock, and perhaps of other equities and indicators) and the output prediction, to better match the actual answer, which is called the ground truth.

With reinforcement learning, however, there's no correct answer to learn from. Reinforcement learning systems produce actions, not predictions: they'll suggest the action most likely to maximize (or minimize) a metric. You can only observe how well you did on a particular task and whether it was done faster or more efficiently than before. Because these systems learn through trial and error, they work best when they can rapidly try an action (or sequence of actions) and get feedback. A stock market algorithm that takes hundreds of actions per day is a good use case; optimizing customer lifetime value over the course of five years, with only irregular interaction points, is not. Significantly, because of how they learn, they don't need mountains of historical data; they'll experiment and create their own data along the way.

They can therefore be used to automate a process, like placing items into a shipping container with a robotic arm, or to optimize a process, like deciding when and through what channel to contact a client who missed a payment, with the highest recouped revenue and lowest expended effort. In either case, designing the inputs, actions, and rewards the system uses is the key: it will optimize exactly what you encode it to optimize, and it doesn't do well with any ambiguity.

Google's use of reinforcement learning to help cool its data centers is a good example of how this technology can be applied. Servers in data centers generate a lot of heat, especially when they're in close proximity to one another, and overheating can lead to IT performance issues or equipment damage. In this use case, the input data is various measurements about the environment, like air pressure and temperature. The actions are fan speed (which controls air flow) and valve opening (the amount of water used) in air-handling units. The system includes some rules to follow safe operating guidelines, and it sequences how air flows through the center to keep the temperature at a specified level while minimizing energy usage. The physical dynamics of a data center environment are complex and constantly changing; a shift in the weather impacts temperature and humidity, and each physical location often has a unique architecture and setup. Reinforcement learning algorithms are able to pick up on nuances that would be too hard to describe with formulas and rules.
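
As a rough sketch (not Google's or DeepMind's actual system), the inputs, actions, and reward described above might be framed in code like this; the sensor fields, target temperature, and weighting are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    air_temperature: float   # inputs: environmental measurements the agent observes
    air_pressure: float
    humidity: float

@dataclass
class Action:
    fan_speed: float         # controls air flow through the air-handling units
    valve_opening: float     # controls the amount of water used for cooling

def reward(obs: Observation, energy_kwh: float,
           target_temp: float = 21.0, temp_weight: float = 10.0) -> float:
    """Penalize energy use and deviation from the target temperature.
    The weights here are made up; in practice they encode the business trade-off."""
    temp_error = abs(obs.air_temperature - target_temp)
    return -(energy_kwh + temp_weight * temp_error)
```

In a real deployment, the safe-operating rules mentioned above would clamp or veto actions before they reach the equipment, rather than relying on the reward alone.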

Here at Borealis AI, we partnered with Royal Bank of Canada's Capital Markets business to develop a reinforcement learning-based trade execution system called Aiden. Aiden's objective is to execute a customer's stock order (to buy or sell a certain number of shares) within a specified time window, seeking prices that minimize loss relative to a specified benchmark. This becomes a sequential decision task because of the detrimental market impact of buying or selling too many shares at once: the task is to sequence actions throughout the day to minimize price impact.

The stock market is dynamic, and the performance of traditional algorithms (the rules-based algorithms traders have used for years) can vary when today's market conditions differ from yesterday's. We felt this was a good reinforcement learning opportunity: it had the right balance between clarity and dynamic complexity. We could clearly enumerate the different actions Aiden could take and the reward we wanted to optimize (minimize the difference between the prices Aiden achieved and the market volume-weighted average price benchmark). The stock market moves fast and generates a lot of data, giving the algorithm quick iterations to learn.
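
For illustration only, a benchmark-relative reward of this kind could look something like the sketch below, for a buy order measured against the market VWAP. The function names and the simple slippage formula are assumptions for exposition, not Borealis AI's or RBC's actual implementation.

```python
def vwap(prices, volumes):
    """Volume-weighted average price over the trading window."""
    total_volume = sum(volumes)
    return sum(p * v for p, v in zip(prices, volumes)) / total_volume

def execution_reward(fill_prices, fill_sizes, market_prices, market_volumes, side="buy"):
    """Reward is the negative slippage of our average fill price against the
    market VWAP benchmark: for a buy order, paying less than VWAP is good."""
    our_avg = vwap(fill_prices, fill_sizes)
    benchmark = vwap(market_prices, market_volumes)
    slippage = our_avg - benchmark
    return -slippage if side == "buy" else slippage
```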

We let the algorithm do just that through countless simulations before launching the system live to the market. Ultimately, Aiden proved able to perform well during some of the more volatile market periods at the beginning of the Covid-19 pandemic, conditions that are particularly tough for predictive AIs. It was able to adapt to the changing environment while continuing to stay close to its benchmark target.

How can you tell if you're overlooking a problem that reinforcement learning might be able to fix? Here's where to start:

Create an inventory of business processes that involve a sequence of steps and clearly state what you want to maximize or minimize. Focus on processes with dense, frequent actions and opportunities for feedback, and avoid processes with infrequent actions and where it's difficult to observe which worked best to collect feedback. Getting the objective right will likely require iteration.

Don't start with reinforcement learning if you can tackle a problem with other machine learning or optimization techniques. Reinforcement learning is helpful when you lack sufficient historical data to train an algorithm. You need to explore options (and create data along the way).

If you do want to move ahead, domain experts should closely collaborate with technical teams to help design the inputs, actions, and rewards. For inputs, seek the smallest set of information you could use to make a good decision. For actions, ask how much flexibility you want to give the system; start simple and later expand the range of actions. For rewards, think carefully about the outcomes and be careful to avoid falling into the traps of considering one variable in isolation or opting for short-term gains with long-term pains.

Will the possible gains justify the costs for development? Many companies need to make digital transformation investments to have the systems and dense, data-generating business processes in place to really make reinforcement learning systems useful. To answer whether the investment will pay off, technical teams should take stock of computational resources to ensure you have the compute power required to support trials and allow the system to explore and identify the optimal sequence. (They may want to create a simulation environment to test the algorithm before releasing it live.) On the software front, if you're planning to use a learning system for customer engagement, you need to have a system that can support A/B testing. This is critical to the learning process, as the algorithm needs to explore different options before it can latch onto which one works best. Finally, if your technology stack can only release features universally, you likely need to upgrade before you start optimizing.

And last but not least, as with many learning algorithms, you have to be open to errors early on while the system learns. It won't find the optimal path from day one, but it will get there in time, and potentially find surprising, creative solutions beyond human imagination when it does.

While reinforcement learning is a mature technology, it's only now starting to be applied in business settings. The technology shines when used to automate or optimize business processes that generate dense data, and where there could be unanticipated changes you couldn't capture with formulas or rules. If you can spot an opportunity, and either lean on an in-house technical team or partner with experts in the space, there's a window to apply this technology to outpace your competition.

Read more:

Why AI That Teaches Itself to Achieve a Goal Is the Next Big Thing - Harvard Business Review

Written by admin |

April 24th, 2021 at 1:58 am

Posted in Alphago

The 13 Best Deep Learning Courses and Online Training for 2021 – Solutions Review

Posted: at 1:58 am


The editors at Solutions Review have compiled this list of the best deep learning courses and online training to consider.

Deep learning is a class of machine learning algorithms that uses multiple layers to progressively extract higher-level features from the raw input. Based on artificial neural networks and representation learning, deep learning can be supervised, semi-supervised, or unsupervised. Deep learning models are commonly based on convolutional neural networks but can also include propositional formulas or latent variables organized by layer.
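
As a minimal illustration of the "multiple layers" idea (not tied to any particular course below), a small Keras model stacks layers that transform raw input into progressively higher-level features; the layer sizes and input shape here are arbitrary placeholders.

```python
import tensorflow as tf

# Each Dense layer learns a higher-level representation of the one before it.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),             # raw input, e.g. a flattened 28x28 image
    tf.keras.layers.Dense(128, activation="relu"),   # first layer: low-level features
    tf.keras.layers.Dense(64, activation="relu"),    # deeper layer: higher-level features
    tf.keras.layers.Dense(10, activation="softmax")  # output: class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```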

With this in mind, we've compiled this list of the best deep learning courses and online training to consider if you're looking to grow your neural network and machine learning skills for work or play. This is not an exhaustive list, but one that features the best deep learning courses and online training from trusted online platforms. We made sure to mention and link to related courses on each platform that may be worth exploring as well. Click Go to training to learn more and register.

Platform: Coursera

Description: In the first course of the Deep Learning Specialization, you will study the foundational concept of neural networks and deep learning. By the end, you will be familiar with the significant technological trends driving the rise of deep learning; build, train, and apply fully connected deep neural networks; implement efficient (vectorized) neural networks; identify key parameters in a neural network's architecture; and apply deep learning to your own applications.

Related paths/tracks: Introduction to Deep Learning, Applied AI with DeepLearning, Introduction to Deep Learning & Neural Networks with Keras, An Introduction to Practical Deep Learning, Building Deep Learning Models with TensorFlow

Platform: Codecademy

Description: Deep learning is a cutting-edge form of machine learning inspired by the architecture of the human brain, but it doesn't have to be intimidating. With TensorFlow, coupled with the Keras API and Python, it's easy to train, test, and tune deep learning models without knowing advanced math. To start this Path, sign up for Codecademy Pro.

Platform: DataCamp

Description: Deep learning is the machine learning technique behind the most exciting capabilities in diverse areas like robotics, natural language processing, image recognition, and artificial intelligence, including the famous AlphaGo. In this course, you'll gain hands-on, practical knowledge of how to use deep learning with Keras 2.0, the latest version of a cutting-edge library for deep learning in Python.

Related paths/tracks: Introduction to Deep Learning with PyTorch, Introduction to Deep Learning with Keras, Advanced Deep Learning with Keras

Platform: Edureka

Description: Deep Learning Training with TensorFlow Certification by Edureka is curated with the help of experienced industry professionals as per the latest requirements and demands. This deep learning certification course will help you master popular algorithms like CNN, RCNN, RNN, LSTM, and RBM using the latest TensorFlow 2.0 package in Python. In this deep learning training, you will be working on various real-time projects like Emotion and Gender Detection, Auto Image Captioning using CNN and LSTM, and many more.

Related path/track: Reinforcement Learning

Platform: edX

Description: This 3-credit-hour, 16-week course covers the fundamentals of deep learning. Students will gain a principled understanding of the motivation, justification, and design considerations of the deep neural network approach to machine learning and will complete hands-on projects using TensorFlow and Keras.

Related paths/tracks: Deep Learning Fundamentals with Keras, Deep Learning with Python and PyTorch, Deep Learning with Tensorflow, Using GPUs to Scale and Speed-up Deep Learning, Deep Learning and Neural Networks for Financial Engineering, Machine Learning with Python: from Linear Models to Deep Learning

Platform: Intellipaat

Description: Intellipaat's Online Reinforcement Learning course is designed by industry experts to assist you in learning and gaining expertise in reinforcement learning, which is one of the core areas of machine learning. In this training, you will be educated on the concepts of machine learning fundamentals, reinforcement learning fundamentals, dynamic programming, temporal difference learning methods, policy gradient methods, Markov Decision Processes, and Deep Q Learning. This Reinforcement Learning certification course will enable you to learn how to make decisions in uncertain circumstances.

Platform: LinkedIn Learning

Description: In this course, learn how to build a deep neural network that can recognize objects in photographs. Find out how to adjust state-of-the-art deep neural networks to recognize new objects, without the need to retrain the network. Explore cloud-based image recognition APIs that you can use as an alternative to building your own systems. Learn the steps involved to start building and deploying your own image recognition system.

Related paths/tracks: Neural Networks and Convolutional Neural Networks Essential Training, Building and Deploying Deep Learning Applications with TensorFlow, PyTorch Essential Training: Deep Learning, Introduction to Deep Learning with OpenCV, Deep Learning: Face Recognition

Platform: Mindmajix

Description: Mindmajix Deep learning with Python Training helps you in mastering various features of debugging concepts, introduction to software programmers, language abilities and capacities, modification of module and pattern designing, and various OS and compatibility approaches. This course also provides training on how to optimize a simple model in Pure Theano, convolutional and pooling layers, and reducing overfitting with dropout regularization. Enroll and get certified now.

Related path/track: AI & Deep Learning with TensorFlow Training

Platform: Pluralsight

Description: In this course, Deep Learning: The Big Picture, you will first learn about the creation of deep neural networks with tools like TensorFlow and the Microsoft Cognitive Toolkit. Next, you'll touch on how they are trained, by example, using data. Finally, you will be provided with a high-level understanding of the key concepts, vocabulary, and technology of deep learning. By the end of this course, you'll understand what deep learning is, why it's important, and how it will impact you, your business, and our world.

Related paths/tracks: Deep Learning with Keras, Building Deep Learning Models Using PyTorch, Deep Learning Using TensorFlow and Apache MXNet on Amazon Sagemaker

Platform: Simplilearn

Description: In this deep learning course with Keras and TensorFlow certification training, you will become familiar with the language and fundamental concepts of artificial neural networks, PyTorch, autoencoders, and more. Upon completion, you will be able to build deep learning models, interpret results, and build your own deep learning project.

Platform: Skillshare

Description: It's hard to imagine a hotter technology than deep learning, artificial intelligence, and artificial neural networks. If you've got some Python experience under your belt, this course will de-mystify this exciting field with all the major topics you need to know. A few hours is all it takes to get up to speed and learn what all the hype is about. If you're afraid of AI, the best way to dispel that fear is by understanding how it really works, and that's what this course delivers.

Related paths/tracks: Ultimate Neural Network and Deep Learning Masterclass, Deep Learning and AI with Python

Platform: Udacity

Description: Become an expert in neural networks, and learn to implement them using the deep learning framework PyTorch. Build convolutional networks for image recognition, recurrent networks for sequence generation, generative adversarial networks for image generation, and learn how to deploy models accessible from a website.

Related path/track: Become a Deep Reinforcement Learning Expert

Platform: Udemy

Description: Artificial intelligence is growing exponentially. There is no doubt about that. Self-driving cars are clocking up millions of miles, IBM Watson is diagnosing patients better than armies of doctors, and Google DeepMind's AlphaGo beat the world champion at Go, a game where intuition plays a key role. But the further AI advances, the more complex become the problems it needs to solve. And only deep learning can solve such complex problems, which is why it's at the heart of artificial intelligence.

Related paths/tracks: Machine Learning, Data Science and Deep Learning with Python, Deep Learning Prerequisites: The Numpy Stack in Python, Complete Guide to TensorFlow for Deep Learning with Python, Data Science: Deep Learning and Neural Networks in Python, Tensorflow 2.0: Deep Learning and Artificial Intelligence, Complete Tensorflow 2 and Keras Deep Learning Bootcamp, Deep Learning Prerequisites: Linear Regression in Python, Natural Language Processing with Deep Learning in Python, Deep Learning: Convolutional Neural Networks in Python, Deep Learning: Recurrent Neural Networks in Python, Deep Learning and Computer Vision A-Z, Deep Learning Prerequisites: Logistic Regression in Python

Tim is Solutions Review's Editorial Director and leads coverage on big data, business intelligence, and data analytics. A 2017 and 2018 Most Influential Business Journalist and 2021 "Who's Who" in data management and data integration, Tim is a recognized influencer and thought leader in enterprise business software. Reach him via tking at solutionsreview dot com.

See more here:

The 13 Best Deep Learning Courses and Online Training for 2021 - Solutions Review

Written by admin |

April 24th, 2021 at 1:58 am

Posted in Alphago

How AI is being used for COVID-19 vaccine creation and distribution – TechRepublic

Posted: at 1:58 am


Artificial intelligence is being used in a variety of ways by those trying to address variants and for data management.

Image: iStock.com/Udom Pinyo

Millions of people across the world have already started the process of receiving a COVID-19 vaccine. More than half of all adults in the U.S. have gotten at least one dose of a COVID-19 vaccine, while state and local officials seek to get even more people vaccinated as quickly as possible. Some health experts have said artificial intelligence will be integral not just in managing the process of creating boosters for the variants of COVID-19 but also in the distribution of the vaccine.

David Smith, associate VP of virtual medicine at UMass Memorial Health Care, explained that the difference between predictive modeling and AI, or machine learning, is that predictive models depend on past data to foretell future events.

AI, on the other hand, not only uses historical data, it makes assumptions about the data without applying a defined set of rules, Smith said.

"This allows the software to learn and adapt to information patterns in more real time. The AI utility for vaccine distribution could be applied in a variety of ways from understanding which populations to target to curve the pandemic sooner, adjusting supply chain and distribution logistics to ensure the most people get vaccinated in the least amount of time, to tracking adverse reactions and side effects," he noted.

SEE: AI in healthcare: An insider's guide (free PDF) (TechRepublic Premium)

Matthew Putman, an expert in artificial intelligence and CEO of Nanotronics, has been working with one of the top vaccine developers and said AI was helping teams manage the deluge of data that comes with a project like this.

While the number of vaccinated people in the country continues to rise by the millions each day, there is already some concern about how the vaccines will hold up against the multitude of variants.

The biggest challenge right now, and the biggest opportunity for changing the way that therapeutics are both developed and deployed, Putman explained, is being able to handle new types of variants.

"In the case of mRNA vaccines, being able to actually do reprogramming as fast as possible in a way that is as coordinated as possible. The things that we have realized in many parts of our lives now is that as good as humans are at exploring things and being creative, being able to deal with enough data and to be able to make intelligent choices about that data is something that actually artificial intelligence agents can do at a pace that is required to keep up with this," Putman said.

"So it means a lot of multivariate correlations to different parts of the process. It means being able to detect potential intrusion and it's a way that we can avoid these lengthy phase three trials. Everything that's going on right now is so incredibly urgent."

Putman added that an AI system would help with building actionable data sets that allow doctors to examine root causes or things that researchers don't have time to spend on.

When researchers are dealing with things like lipid nanoparticles and the tasks of imaging and classifying different features and trends that are on a scale, it can be difficult for humans to manage. AI is now being used for analyzing these images in real time and has helped researchers try to pick out genetic mutations and variations, according to Putman.

"People are more open to AI than ever, and this emergency has brought a focus on things that probably would have been on the backburner. But AI is starting to be used for classification and to understand what genomic features and what type of nano compounding has been going on," Putman added.

"AI has been used for the development of components and much more. It's been crucial to the process and will be crucial to an alteration to the vaccine, which is looking like it will have to be done at some point. The way I look at contemporary AI systems, it's taking into account what move is being made next. This is Alpha Go for drug discovery. A virus will mutate in different ways and now a response to that can be developed in new ways."

Putman went on to compare the situation to the yearly creation of a new flu vaccine, noting that once you've grown a lot of biological specimens, it's a slow tedious process to change for new mutations.

"Using mRNA, it's not, and using AI for being able to see what changes are going on from everywhere from the sequence to the quality inspection is a big deal," Putman said.

When asked about the production of boosters for variants, Putman said adjusting a process usually takes years just for a normal product, and if you're doing something as new as what is going on with the vaccine and you're dealing with the entirety of the supply chain, the process has to be adjusted as fast as the science does.

"We have the science now. We've shown that these types of vaccines can be developed. Now, making sure that your production process stays the same, even if you've adjusted something else, is something that if it's put in place, the process will adjust," Putman said.

"If an AI system worked for this or an intelligent factory system is put into place, then the process can adjust as quickly as the R&D can. Without AI, it would be very difficult."

Cheryl Rodenfels, a healthcare strategist at Nutanix, echoed those remarks, explaining that AI can be an incredibly useful tool when it comes to vaccine distribution.

Organizations that utilize workflow improvement processes can harness AI tools to ensure that the processes are being followed as designed and that any missing elements are identified, Rodenfels said, adding that this process plays into vaccine tracking measures specifically, as AI will track vaccine handling, storage and administration.

"Relying on the technology to manage distribution data eliminates human error, and ensures that healthcare organizations are accurately tracking the vast amounts of data associated with the vaccine rollout," Rodenfels said.

"However, the biggest problem with using AI to assist with vaccine rollout is that each manufacturer has its own process and procedure for the handling, storage, tracking, training and administration of the vaccine. This is then complicated by the amount of manufacturers in the market. Another issue is that hospital pharmacies and labs don't have a lot of extra space to stage and set up the doses. In order to insert effective AI, a hospital would need to ensure a process architect and a data scientist work collaboratively."

These issues are compounded by the fact that there is no baseline for how these things are supposed to work, she noted. The measurements, analytics and information will be developed on the fly, and because it is unknown how many vaccines each organization will be required or allowed to have, it is difficult to predict the capacity or amount of data that will be produced.

The advantage to using AI in vaccine rollout is that it will set us up for success during round two of vaccine dosing. It will also positively impact future vaccine dissemination by creating a blueprint for the next mass inoculation need, both Rodenfels and Putman said.

Walter McAdams, SQA Group senior vice president of solutions delivery, said that AI will be useful in analyzing how the virus is mutating over time, how variations could affect current vaccine make-ups, and how to use that information to accelerate the development of virus countermeasures.

Researchers, he said, can leverage data about how COVID-19 has mutated and vaccine effectiveness to continuously refine the vaccine sequence and, in some cases, get ahead of COVID-19 and prepare new vaccines before additional strains fully develop.


Link:

How AI is being used for COVID-19 vaccine creation and distribution - TechRepublic

Written by admin |

April 24th, 2021 at 1:58 am

Posted in Alphago

Digitization in the energy industry – the machine learning revolution – Lexology

Posted: at 1:57 am


In researching for this blog, I reached out to Brendan Bennett, a Reinforcement Learning Researcher at the University of Alberta, for his thoughts on how emerging digital technologies may be deployed in the energy industry. Brendan and I discussed how some recent landmark accomplishments in artificial intelligence might soon make their way into the energy industry.

Digital innovation in commercial spheres has largely been a story of improving efficiency and reliability while reducing costs. In the energy sector, these innovations have been a result of oil and gas companies doing what they do best: relying on talented engineers to improve on existing solutions. Improvements have quickly spread across the industry, bringing down costs and making processes more efficient.

I recently co-authored an article on the future of Artificial Intelligence in the Canadian Oil Patch, which discusses a number of examples of current innovations, including AI-powered predictive maintenance, optimized worker safety, and digital twin technology for better visualization of construction projects and formations. Looking forward, network effects, improving sensors, and algorithmic advances will continue to increase the rate of innovation and prevalence of new tech in the energy industry.

The most common example of network effects can likely be found in your pocket or in your hand right now. Because of the network effects of the smartphone, every new smartphone purchase increases the value of everyone else's smartphones by a little bit. Coupled with economies of scale in production, this means that the cost of these devices falls, while the value they provide increases. Some may view this as a virtuous cycle.

This same effect can be seen with sensors deployed in the oil and gas sector. Advances in technology and widespread use are pushing down the cost of sensors. This allows for more sensors to be deployed in a given application, creating a more complete and reliable data set when all measurements are taken together. Algorithms trained on larger, more comprehensive data sets can produce leaps in efficiency that were previously impossible.

DeepMind, an artificial intelligence research laboratory with a research office in Edmonton, recently combined prolific sensors with its own machine learning capabilities to reduce the cooling bill at Google's data centres by up to 40%. Cooling is one of the primary uses of energy in a data centre; the servers running services like Gmail and YouTube generate a massive amount of heat. Given that Google already runs some of the most sophisticated energy management technology in the world at its data centres, an energy savings of almost half is astounding.

The same combination of plentiful sensors and advanced machine learning will soon be applied throughout the energy value chain, and promises to deliver those same astounding results. Accurate sensors providing clear insight into power use relative to a variety of factors will soon allow power grids run by machine learning algorithms to more accurately predict periods of peak demand, and provide the energy to satisfy demand with dramatic efficiency. These systems could also be designed to optimize for multiple variables, providing low cost power while also minimizing CO2 emissions.

More abstractly, AlphaFold, another project from DeepMind, employed deep neural networks to model protein folding, providing a solution to a 50-year-old grand challenge in biology. The protein-folding problem has baffled biologists for decades. Cyrus Levinthal, an eminent biologist, estimated in 1969 that it would take longer than the age of the known universe to describe all of the possible configurations of a typical protein through brute-force calculation, an estimated 10^300 possible configurations. AlphaFold's deep neural network can predict the configuration of a protein with stunning accuracy, in less time than standard complex experimental methods.

A similar approach might be applied to the problems of resource extraction and mapping of geological formations. Feeding the neural net with massive amounts of information generated from sensors that are cheaper and more plentiful in the oil and gas industry may lead to improvements in production efficiency. Further, the ability to map and test within the digital playground of these advanced neural nets may help producers avoid undesired consequences to human health and to the environment.

These advanced AI technologies will fundamentally change the way we explore for and develop our natural resources. Organizations like Avatar Innovations, which work with some of the province's leading entrepreneurs to bring innovations into the energy space, will be pivotal in helping Alberta lead the way in the development of these technologies.

Read more:

Digitization in the energy industry - the machine learning revolution - Lexology

Written by admin |

April 24th, 2021 at 1:57 am

Posted in Machine Learning

A Guide To Machine Learning: Everything You Need To Know – Analytics Insight

Posted: at 1:57 am


Artificial Intelligence and other disruptive technologies are spreading their wings in the current scenario. Technology has become a mandatory element for all kinds of businesses across industries around the globe. Let us travel back to 1958, when Frank Rosenblatt created the first artificial neural network that could recognize patterns and shapes. From that primitive stage, we have now reached a place where machine learning is an integral part of almost all software and applications.

Machine learning is resonating with everything now, be it automated cars, speech recognition, chatbots, smart cities, and whatnot. The abundance of big data and the significance of data analytics and predictive analytics has made machine learning an imperative technology.

Machine learning, as the name suggests, is a process in which machines learn from and analyze the data fed to them and predict the outcome. There are different types of machine learning, such as supervised, unsupervised, and semi-supervised learning. Machine learning is the stairway to artificial intelligence: it learns via algorithms trained on a database and derives answers and correlations from the data.
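
As a toy illustration of the supervised flavor of machine learning described above, the snippet below fits a model to labeled examples and predicts outcomes for new data; the feature names and values are made up for the example.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is [age, clicked_promo], label is purchased (1) or not (0)
X_train = [[25, 1], [40, 0], [35, 1], [50, 0], [23, 1], [60, 0]]
y_train = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X_train, y_train)               # learn patterns from the labeled data
print(model.predict([[30, 1], [55, 0]]))  # predict the outcome for new, unseen customers
```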

Machine learning is an integral part of automation and digital transformation. In 2016, Google introduced its graph-based machine learning tool. It used the semi-supervised learning method to connect clusters of data based on their similarities. Machine learning technology helps industries identify market trends, potential risks, customer needs, and business insights. Today, business intelligence and automation are the norms and ML is the foundation to achieve these and enhance the efficiency of your business.

A term identified by Gartner, Hyperautomation is the new tech trend in the world. It enables industries to automate all possible operations and gain intelligent and real-time insights from the data collected. ML, AI, and RPA are some of the important technologies behind the acceleration of hyperautomation. AI's ability to augment human behaviour is aided by machine learning. Machine learning algorithms can automate various tasks once the algorithm is trained. ML models, along with AI, will enhance the capacity of machines and software to automatically improve and respond to changes according to the business requirements.

According to Industry Research, the global machine learning market is projected to grow by USD 11.16 billion between 2020 and 2024, progressing at a CAGR of 39% during the forecast period.

This data is enough to indicate the growth and acceptance of ML across the world. Let us understand how different industries are using ML.

Other industries leveraging ML include banking and finance, cybersecurity, manufacturing, media, automobile, and many more.

Executives and C-Suite professionals should consider it a norm to have a strategy or goal before putting ML into practice. The true capability of this technology can only be extracted by developing a strategy for its use. Otherwise, the disruptive tech might remain inside closed doors, just automating routine and mundane tasks. ML's capability to innovate should not be chained just to automating repetitive tasks.

According to McKinsey, companies should have two types of people, quants and translators, to unleash the power of ML. Translators should be the ones bridging the gap between the complex data analysis performed by algorithms and the executives, converting it into readable and understandable business insights.

Machine learning is not an unfamiliar technology these days, but it still takes time and patience to leave the legacy systems behind and embrace the power of disruptive technologies. Companies should focus on democratizing ML and data analytics for their employees and create a transparent ecosystem to leverage the capabilities of these techs by demystifying them.

The rest is here:

A Guide To Machine Learning: Everything You Need To Know - Analytics Insight

Written by admin |

April 24th, 2021 at 1:57 am

Posted in Machine Learning

Facebook and the Power of Big Data and Greedy Algorithms – insideBIGDATA

Posted: at 1:57 am


Is Facebook evil?

The answer to this simple question is not that simple. The tools that have enabled Facebook to enjoy its position are its access to massive amounts of data and its machine learning algorithms. And it is in these two areas that we need to explore if there is any wrongdoing on Facebook's part.

Facebook, no doubt, is a giant in the online space. Despite its arguments that it is not a monopoly, many think otherwise. The role that Facebook plays in our lives, specifically in our democracy, has been heavily scrutinized and debated over the last few years, with the lawsuits brought by the federal government and dozens of state governments toward the end of 2020 being the latest examples. While many regulators and most regular folks will argue that Facebook exerts unparalleled power over who shares what and how ordinary people get influenced by information and misinformation, many still don't quite understand where the problem really lies. Is it in the fact that Facebook is a monopoly? Is it that Facebook willingly takes ideological sides? Or is it in Facebook's grip on small businesses and its massive user base through data sharing and user tracking? It's all of these and more. Specifically, it's Facebook's access to large amounts of data through its connected services and the algorithms that process this data in a very profit-focused way to turn up user engagement and revenue.

Most people understand that there are algorithms that drive systems such as Facebook. But their view of such algorithms is quite simplistic: that is, an algorithm is a set of rules and step-by-step instructions that tells a system how to act or behave. In reality, hardly any critical aspect of today's computational systems, least of all Facebook's, is driven by such algorithms. Instead, they use machine learning, which by one definition means computers writing their own algorithms. Okay, but at least we're controlling the computers, right? Not really.

The whole point of machine learning is that we, the humans, don't have enough time, power, or ability to churn through massive amounts of data to look for relevant patterns and make decisions in real time. Instead, these machine learning algorithms do that for us. But how can we tell if they are doing what we want them to do? This is where the biggest problem comes in. Most of these algorithms optimize their learning based on metrics such as user engagement. More user engagement leads to more usage of the system, which in turn drives up ad revenue and other business metrics. On the user side, higher engagement leads to even more engagement, like an addiction. On the business side, it leads to more and richer data that Facebook can sell to vendors and partners.

Facebook can use their passivity in this process to argue that they are not evil. After all, they don't manually or purposefully discriminate against anyone, and they don't intentionally plant misinformation in users' feeds. But they don't need to. Facebook holds a mirror up to our society and amplifies our bad instincts because of how their machine learning-powered algorithms learn and optimize for user engagement outcomes. Unfortunately, since controversy and misinformation tend to attract high user engagement, the algorithms will automatically prioritize such posts because they are designed to maximize engagement.

A user is worth hundreds of dollars to Facebook, depending on how active they are on the platform. A user who is on multiple platforms that Facebook owns is worth a lot more. Facebook can claim that keeping these platforms connected is best for the users and the businesses, and that may be the case to some extent, but the one entity that has the most to gain from this is Facebook.

There are reasonable alternatives to WhatsApp and Instagram, but none for Facebook. And it is that flagship service and monopoly of Facebook that makes even those other apps a lot more compelling and much harder to leave for their users. Breaking up these three services will create good competition, and drive up innovation and value for the users. But it will also make it harder for Facebook to leverage its massive user base for the kind of data they currently collect (and sell) and the machine learning algorithms they could run. There is a reason Facebook has doubled its lobbying spending in the last five years. Facebook is also trying to fight Apple's stand on informing its users about user tracking with an argument that giving the users a choice about tracking them or not will hurt small businesses. Even Facebook's own employees don't buy that argument.

I may be singling out Facebook here, but many of the same arguments can be made against Google and other monopolies. We see the same pattern: it starts with gaining users, giving them free services, and then bringing in ads. There is nothing wrong with ads; television and radio have run them for decades. But given the way the digital ad market works, and the way these services train their machine learning algorithms, it is easy for them to go after data at any cost, including user privacy. More data, more learning, more user engagement, more sales of ads and user data, and the cycle continues. At some point the algorithms take on a life of their own, disconnected from what is good or right for the users. Some of the algorithms' goals may align with those of users and businesses, but in the end, the job of these algorithms is to increase the bottom line for their masters, in this case Facebook.

To counteract this, we need more than just regulations. We also need education and awareness. Every time we post, click, or like something on these platforms, we are casting a vote. Can we exercise some discipline in this voting process? Can we inform ourselves before we vote? Can we think about change? In the end, this isn't just about free markets; it's about free will.

About the Author

Dr. Chirag Shah, associate professor in the Information School at the University of Washington.


Go here to see the original:

Facebook and the Power of Big Data and Greedy Algorithms - insideBIGDATA


April 24th, 2021 at 1:57 am

Posted in Machine Learning

Will Quantum Computing Ever Live Up to Its Hype? – Scientific American

Posted: at 1:56 am


Quantum computers have been on my mind a lot lately. A friend who likes investing in tech, and who knows about my attempt to learn quantum mechanics, has been sending me articles on how quantum computers might help "solve some of the biggest and most complex challenges we face as humans," as a Forbes commentator declared recently. My friend asks, "What do you think, Mr. Science Writer? Are quantum computers really the next big thing?"

I've also had exchanges with two quantum-computing experts with distinct perspectives on the technology's prospects. One is computer scientist Scott Aaronson, who has, as I once put it, one of the highest intelligence/pretension ratios I've ever encountered. Not to embarrass him further, but I see Aaronson as the conscience of quantum computing, someone who helps keep the field honest.

The other expert is physicist Terry Rudolph. He is a co-author, the "R," of the PBR theorem, which, along with its better-known predecessor, Bell's theorem, lays bare the peculiarities of quantum behavior. In 2011 Nature described the PBR theorem as the most important general theorem relating to the foundations of quantum mechanics since Bell's theorem was published in 1964. Rudolph is also the author of Q Is for Quantum and co-founder of the quantum-computing startup PsiQuantum. Aaronson and Rudolph are on friendly terms; they co-authored a paper in 2007, and Rudolph wrote about Q Is for Quantum on Aaronson's blog. In this column, I'll summarize their views and try to reach a coherent conclusion.

First, a little background. Quantum computers exploit superposition (a particle inhabits two or more mutually exclusive states at the same time) and entanglement (a special form of superposition, in which two or more particles influence each other in spooky ways) to do things that ordinary computers can't. A bit, the basic unit of information of a conventional computer, can be in one of two states, representing a one or a zero. Quantum computers, in contrast, traffic in qubits, which are constructed out of superposed particles that embody numerous states simultaneously.
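For readers who want the textbook notation behind that description, a single qubit and a pair of entangled qubits are conventionally written as follows; this is standard quantum-mechanics shorthand, not something drawn from the column itself.

```latex
% A single qubit: a superposition of the basis states |0> and |1>,
% weighted by complex amplitudes whose squared magnitudes sum to one.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]

% Two entangled qubits (a Bell state): measuring one immediately
% determines the outcome for the other.
\[
  \lvert \Phi^{+} \rangle = \tfrac{1}{\sqrt{2}} \bigl( \lvert 00 \rangle + \lvert 11 \rangle \bigr)
\]
```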

For decades, quantum computing has been little more than a hypothesis, or laboratory curiosity, as researchers wrestled with the technical complexities of maintaining superposition and entanglement for long enough to perform useful calculations. (Remember that as soon as you look at an electron or cat, its superposition vanishes.) Now, tech giants like IBM, Amazon, Microsoft and Google have invested in quantum computing, as have many smaller companies, 193 by one count. In March, the startup IonQ announced a $2 billion deal that would make it the first publicly traded firm dedicated to quantum computers.

The Wall Street Journal reports that IonQ plans to produce a device roughly the size of an Xbox videogame console by 2023. Quantum computing, the Journal states, could speed up calculations related to finance, drug and materials discovery, artificial intelligence and others, and crack many of the defenses used to secure the internet. According to Business Insider, quantum machines could help us cure cancer and even take steps to reverse climate change.

This is the sort of hype that bugs Scott Aaronson. He became a computer scientist because he believes in the potential of quantum computing and wants to help develop it. He'd love to see someone build a machine that proves the naysayers wrong. But he worries that researchers are making promises they can't keep. Last month, Aaronson fretted on his blog Shtetl-Optimized that the hype, which he has been countering for years, has gotten especially egregious lately.

What's new, Aaronson wrote, is that millions of dollars are now potentially available to quantum computing researchers, along with equity, stock options, and whatever else causes ka-ching sound effects and bulging eyes with dollar signs. And in many cases, to have a shot at such riches, all an expert needs to do is profess optimism that quantum computing will have revolutionary, world-changing applications, and have them soon. Or at least, not object too strongly when others say that. Aaronson elaborated on his concerns in a two-hour discussion on the media platform Clubhouse. Below I summarize a few of his points.

Quantum-computing enthusiasts have declared that the technology will supercharge machine learning. It will revolutionize the simulation of complex phenomena in chemistry, neuroscience, medicine, economics and other fields. It will solve the traveling-salesman problem and other conundrums that resist solution by conventional computers. It's still not clear whether quantum computing will achieve these goals, Aaronson says, adding that optimists might be in for a rude awakening.

Popular accounts often imply that quantum computers, because superposition and entanglement allow them to carry out multiple computations at the same time, are simply faster versions of conventional computers. Those accounts are misleading, Aaronson says. Compared to conventional computers, quantum computers are unnatural devices that might be best suited to a relatively narrow range of applications, notably simulating systems dominated by quantum effects.

The ability of a quantum computer to surpass the fastest conventional machine is known as quantum supremacy, a phrase coined by physicist John Preskill in 2012. Demonstrating quantum supremacy is extremely difficult. Even in conventional computing, proving that your algorithm beats mine isn't straightforward. You must pick a task that represents a fair test and choose valid methods of measuring speed and accuracy. The outcomes of tests are also prone to misinterpretation and confirmation bias. Testing creates an enormous space for mischief, Aaronson says.

Moreover, the hardware and software of conventional computers keep improving. By the time quantum computers are ready for the marketplace, they might lose potential customers if, for example, classical computers become powerful enough to simulate the quantum systems that chemists and materials scientists actually care about in real life, Aaronson says. Although quantum computers would retain their theoretical advantage, their practical impact would be less.

As quantum computing attracts more attention and funding, Aaronson says, researchers may mislead investors, government agencies, journalists, the public and, worst of all, themselves about their work's potential. If researchers can't keep their promises, excitement might give way to doubt, disappointment and anger, Aaronson warns. The field might lose funding and talent and lapse into a quantum-computer winter like those that have plagued artificial intelligence.

Lots of other technologies (genetic engineering, high-temperature superconductors, nanotechnology and fusion energy come to mind) have gone through phases of irrational exuberance. But something about quantum computing makes it especially prone to hype, Aaronson suggests, perhaps because quantum stands for something cool you shouldn't be able to understand.

And that brings me back to Terry Rudolph. In January, after reading about my struggle to understand the Schrödinger equation, Rudolph emailed me to suggest that I read Q Is for Quantum. The 153-page book explains quantum mechanics with a little arithmetic and algebra and lots of diagrams of black-and-white balls going in and out of boxes. Q Is for Quantum has given me more insight into quantum mechanics, and quantum computing, than anything I've ever read.

Rudolph begins by outlining simple rules underlying conventional computing, which allow for the manipulation of bits. He then shifts to the odd rules of quantum computing, which stem from superposition and entanglement. He details how quantum computing can solve a specific problem (one involving thieves stealing code-protected gold bars from a vault) much more readily than conventional computing. But he emphasizes, like Aaronson, that the technology has limits; it cannot compute the uncomputable.

After I read Q Is for Quantum, Rudolph patiently answered my questions about it. You can find our exchange (which assumes familiarity with the book) here. He also answered my questions about PsiQuantum, the firm he co-founded in 2016, which until recently has avoided publicity. Although he is wittily modest about his talents as a physicist (which adds to the charm of Q Is for Quantum), Rudolph is boosterish about PsiQuantum. He shares Aaronson's concerns about hype, and the difficulties of establishing quantum supremacy, but he says those concerns do not apply to PsiQuantum.

The company, he says, is closer than any other firm by a very large margin to building a useful quantum computer, one that solves an impactful problem that we would not have been able to solve otherwise (e.g., something from quantum chemistry which has real-world uses). He adds, "Obviously, I have biases, and people will naturally discount my opinions. But I have spent a lot of time quantitatively comparing what we are doing to others."

Rudolph and other experts contend that a useful quantum computer with robust error-correction will require millions of qubits. PsiQuantum, which constructs qubits out of light, expects by the middle of the decade to be building fault-tolerant quantum computers with fully manufactured components capable of scaling to a million or more qubits, Rudolph says. PsiQuantum has partnered with the semiconductor manufacturer GlobalFoundries to achieve its goal. The machines will be room-sized, comparable to supercomputers or data centers. Most users will access the computers remotely.
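To see why the target is measured in millions of physical qubits, here is a back-of-the-envelope illustration; both numbers below, including the 1,000-to-1 physical-to-logical overhead, are assumed round figures chosen for illustration, not figures from PsiQuantum or anyone quoted here.

```python
# Back-of-the-envelope illustration of error-correction overhead.
# Both quantities are assumptions chosen only to make the scale tangible.

logical_qubits_needed = 1_000   # assumed size of a useful, error-corrected computation
physical_per_logical = 1_000    # assumed overhead to encode one robust logical qubit

physical_qubits = logical_qubits_needed * physical_per_logical
print(f"{physical_qubits:,} physical qubits")  # prints: 1,000,000 physical qubits
```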

Could PsiQuantum really be leading all the competition by a wide margin, as Rudolph claims? Can it really produce a commercially viable machine by 2025? I don't know. Quantum mechanics and quantum computing still baffle me. I'm certainly not going to advise my friend or anyone else to invest in quantum computers. But I trust Rudolph, just as I trust Aaronson.

Way back in 1994, I wrote a brief report for Scientific American on quantum computers, noting that they could, in principle, perform tasks beyond the range of any classical device. I've been intrigued by quantum computing ever since. If this technology gives scientists more powerful tools for simulating complex phenomena, and especially the quantum weirdness at the heart of things, maybe it will give science the jump start it badly needs. Who knows? I hope PsiQuantum helps quantum computing live up to the hype.

This is an opinion and analysis article.

Further Reading:

Will Artificial Intelligence Ever Live Up to Its Hype?

Is the Schrödinger Equation True?

Quantum Mechanics, the Chinese Room Experiment and the Limits of Understanding

Quantum Mechanics, the Mind-Body Problem and Negative Theology

For more ruminations on quantum mechanics, see my new book Pay Attention: Sex, Death, and Science and "Tragedy and Telepathy," a chapter in my free online book Mind-Body Problems.

View original post here:

Will Quantum Computing Ever Live Up to Its Hype? - Scientific American


April 24th, 2021 at 1:56 am

Posted in Quantum Computer

