
Over 600 take the Delaware River plunge to benefit Special Olympics (PHOTOS) – lehighvalleylive.com

Posted: February 22, 2020 at 8:46 pm


Wearing an orange DOC jumpsuit, Chris Adamcik strolled around Easton's Scott Park handcuffed to his son, 14-year-old Zeven Adamcik, who wore a shirt emblazoned with "POLICE."

They were members of the Chillie Willies team getting ready for the eighth annual Lehigh Valley Polar Plunge into the Delaware River, and the team's theme for 2020 was cops and robbers.

"It'll be down to shorts when it's time to go in the water, but for now we've got our costumes going," saiid Chris Adamcik.

Zeven Adamcik is a Special Olympics athlete, playing basketball and baseball, and Special Olympics is the reason the Salisbury Township duo was about to join around 600 others in a river in February.

"We fundraise so there's no cost to our athletes or their families to compete in any of the sports or programming that we offer," said Amanda Sechrist, manager of Northampton County Special Olympics.

When the first of 13 groups of plungers stepped into the river, the air temperature was about 50 degrees with sunny skies. But the water was about 36 degrees, according to a thermometer in a nearby angler's boat, and a steady wind was gusting to about 23 mph. Firefighters from Easton watched onshore and aboard a rescue boat.

"I'm numb, I'm very cold, but it was definitely worth it," said Brianna Groff, an employee of Lehigh Valley Polar Plunge sponsor Wawa, as she raced for her towel.

"It was awesome," said 25th Street Wawa worker Joshua Shutt. "Way colder than I thought. Definitely was not ready for that."

The local plunge's first seven years raised about $640,000, an organizer said. Saturday's event raised an additional $100,000. Participants needed to contribute at least $50, although they could accept pledges online through the end of February. Super plungers had to raise $500 apiece for the right to jump every hour for 24 hours into the indoor pool around the corner at the Grand Eastonian Hotel & Suites.

"Team Quack Attack!" members Karissa Hensel, Amanda Haese and Patti Shane were among those who jumped all night into the pool then into the river on Saturday. All three are special education teachers at Middle Smithfield Elementary School through Colonial Intermediate Unit 20.

"I think this was the coldest one," said Hensel, a veteran plunger. "I was expecting it to be cold but I think it was a little colder than I anticipated."

"Very chilly but refreshing," Shane said.

Special Olympics is marking its 50th year in 2020 of providing year-round training and activities for children and adults with intellectual and physical disabilities.

"Abilities outweigh disabilities" was the theme of an Easton Area School District team that included special education administrator Elizabeth Brill and high school senior Samantha Kessler.

"The reason why we are doing this is to support our students with special needs," said plunger Tracie Stump, a special education teacher at Shawnee Elementary School in Forks Township. "We feel that as a community and as teachers and students of the Easton Area School District, that it is our responsibility to really support our athletes."

Employees of Avantor in Lopatcong Township with United Steel Workers Local 10-00729 came out for the plunge wearing matching "Steel Force Chillers" black hooded sweatshirts. It wasn't the first plunge for local President Tim Sutter.

"The minute I said yes, I'd do this, I started thinking back to 2016 and how painful it actually is to go in that water," he said. "But it's for a good cause."

Kurt Bresswein may be reached at kbresswein@lehighvalleylive.com. Follow him on Twitter @KurtBresswein and Facebook. Find lehighvalleylive.com on Facebook.

See original here:
Over 600 take the Delaware River plunge to benefit Special Olympics (PHOTOS) - lehighvalleylive.com

Written by admin |

February 22nd, 2020 at 8:46 pm

Posted in Online Education

This 24-Year-Old Makes $750K Teaching Women How To Make Money On Instagram – Forbes

Posted: at 8:46 pm


Karrie Brady makes $750K annually teaching women how to monetize their knowledge base.

The business of the future is right at our fingertips.

If you follow anyone with a substantial fanbase, you're probably already familiar with the typical approaches most take to monetize influence: brand deals, endorsements, and sponsored content.

Karrie Brady, a 24-year-old business coach and sales expert, has a different idea.

Brady, whose business is currently bringing in $750K annually, teaches women how to become coaches, educators, and authorities within their respective fields. In doing so, she shows them how to turn their expertise into something that can help others and build their income, too.

The opportunities to capitalize on this, she believes, are limitless.

After leaving her biomedical engineering studies, Brady returned home to take care of her father following an accident. Needing a way to make money while remaining remote, she began her business as a fitness and health coach at just 19 years old. Her selling ability gained notoriety, and soon influencers were hiring her to sell their own products.

Today, Brady's own clients utilize her expertise through one of the following:

She explains that the entirety of her income is either generated from one of those modules, or in-person speaking events.

Brady believes that women from all walks of life have the power and potential to monetize their skillsets in a similar way. "There are probably 40 different ways that people can get into online education. There's coaching, they can create courses or memberships, e-books are so common, too," she explains. "There are so many opportunities. A gardener could be an educator. You could create a course or book called How To Take Care Of The 10 Most Popular Houseplants."

Any skillset can be turned into education, Karrie Brady says.

To date, some of Brady's biggest successes include one woman who, in her first year of coaching, grossed $220K and saved $120K of it. Another was a photographer who transitioned to coaching and earned an additional $75K in her first year.

However, it's not just about learning how to package your knowledge into a course, book, or coaching program. It's first about learning how to position, market and brand yourself to draw in potential clients in the first place.

"I think what people need to realize is that in today's day and age, they want to buy from someone they are connected to. They want to be able to stand behind the brand," Brady shares. "When you're positioning yourself as an authority and building up a social media presence, you are humanizing your business. It allows people to feel more invested in you and it allows people to stand behind your brand in more ways than just the product."

To do this, Brady helps her clients with everything from the "magic formula" for writing an Instagram bio to which photos are most appealing (she argues that a straight-on shot is most inviting, and the second best is when your head is turned toward the follow button, as a sort of subliminal nod). She also coaches on making all content SEO-optimized, writing captions the correct way, and nailing the exact verbiage that will appeal to a potential client.

"There are three people you're selling to," Brady explains. "The person who doesn't even know that their problem exists; the person who knows the problem but not the solution; and the person who knows the problem and the solution. The last one is who you are positioning the offer to." According to Brady, it's essential to get into the headspace of each. Over time, you're nurturing them to become clients.

Aside from tech glitches and poor branding, Brady shares that the biggest obstacle she sees women facing is the dreaded imposter syndrome. It's an issue, she says, that requires a lot of work to overcome. "People feel like they are not enough, they are not ready. If you're ready, you've waited too long. There's so much power that you have. You only need to be two steps ahead of someone to effectively coach them."


"There are billions of people in the world, and I can think off the top of my head there are probably 10 people in your current audience that would love to learn from you."

Continued here:
This 24-Year-Old Makes $750K Teaching Women How To Make Money On Instagram - Forbes

Written by admin |

February 22nd, 2020 at 8:46 pm

Posted in Online Education

What is machine learning? Everything you need to know | ZDNet

Posted: at 8:45 pm


Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence -- helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

At a very high level, machine learning is the process of teaching a computer system how to make accurate predictions when fed data.

Those predictions could be answering whether a piece of fruit in a photo is a banana or an apple, spotting people crossing the road in front of a self-driving car, whether the use of the word "book" in a sentence relates to a paperback or a hotel reservation, whether an email is spam, or recognizing speech accurately enough to generate captions for a YouTube video.

The key difference from traditional computer software is that a human developer hasn't written code that instructs the system how to tell the difference between the banana and the apple.

Instead a machine-learning model has been taught how to reliably discriminate between the fruits by being trained on a large amount of data, in this instance likely a huge number of images labelled as containing a banana or an apple.

Data, and lots of it, is the key to making machine learning possible.

Machine learning may have enjoyed enormous success of late, but it is just one method for achieving artificial intelligence.

At the birth of the field of AI in the 1950s, AI was defined as any machine capable of performing a task that would typically require human intelligence.

AI systems will generally demonstrate at least some of the following traits: planning, learning, reasoning, problem solving, knowledge representation, perception, motion, and manipulation and, to a lesser extent, social intelligence and creativity.

Alongside machine learning, there are various other approaches used to build AI systems, including evolutionary computation, where algorithms undergo random mutations and combinations between generations in an attempt to "evolve" optimal solutions, and expert systems, where computers are programmed with rules that allow them to mimic the behavior of a human expert in a specific domain, for example an autopilot system flying a plane.

Machine learning is generally split into two main categories: supervised and unsupervised learning.

Supervised learning basically teaches machines by example.

During training for supervised learning, systems are exposed to large amounts of labelled data, for example images of handwritten figures annotated to indicate which number they correspond to. Given sufficient examples, a supervised-learning system would learn to recognize the clusters of pixels and shapes associated with each number and eventually be able to recognize handwritten numbers, able to reliably distinguish between the numbers 9 and 4 or 6 and 8.
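To make that concrete, here is a minimal sketch of supervised learning in Python using scikit-learn's small bundled handwritten-digits dataset; the model and dataset are illustrative stand-ins chosen for brevity, not the large systems described in this article.

```python
# A minimal sketch of supervised learning on labelled handwritten digits,
# using scikit-learn's small bundled digits dataset.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                      # 8x8 images, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=2000)   # a simple classifier
model.fit(X_train, y_train)                 # learn from labelled examples

print("accuracy on held-out digits:", model.score(X_test, y_test))
```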

However, training these systems typically requires huge amounts of labelled data, with some systems needing to be exposed to millions of examples to master a task.

As a result, the datasets used to train these systems can be vast, with Google's Open Images Dataset having about nine million images, its labeled video repository YouTube-8M linking to seven million labeled videos and ImageNet, one of the early databases of this kind, having more than 14 million categorized images. The size of training datasets continues to grow, with Facebook recently announcing it had compiled 3.5 billion images publicly available on Instagram, using hashtags attached to each image as labels. Using one billion of these photos to train an image-recognition system yielded record levels of accuracy -- of 85.4 percent -- on ImageNet's benchmark.

The laborious process of labeling the datasets used in training is often carried out using crowdworking services, such as Amazon Mechanical Turk, which provides access to a large pool of low-cost labor spread across the globe. For instance, ImageNet was put together over two years by nearly 50,000 people, mainly recruited through Amazon Mechanical Turk. However, Facebook's approach of using publicly available data to train systems could provide an alternative way of training systems using billion-strong datasets without the overhead of manual labeling.

In contrast, unsupervised learning tasks algorithms with identifying patterns in data, trying to spot similarities that split that data into categories.

An example might be Airbnb clustering together houses available to rent by neighborhood, or Google News grouping together stories on similar topics each day.

The algorithm isn't designed to single out specific types of data, it simply looks for data that can be grouped by its similarities, or for anomalies that stand out.
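As a simple illustration of that idea, the sketch below clusters synthetic two-dimensional points with k-means, with no labels involved; the data and the choice of three clusters are invented purely for the example.

```python
# A minimal sketch of unsupervised learning: k-means groups points purely by
# similarity, with no labels telling it what the groups "mean".
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Three synthetic blobs of 2-D points (e.g. items described by two features).
data = np.vstack([
    rng.normal(loc=(0, 0), scale=0.5, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.5, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(data)
print(kmeans.cluster_centers_)   # the discovered group centres
print(kmeans.labels_[:10])       # which cluster each point was assigned to
```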

The importance of huge sets of labelled data for training machine-learning systems may diminish over time, due to the rise of semi-supervised learning.

As the name suggests, the approach mixes supervised and unsupervised learning. The technique relies upon using a small amount of labelled data and a large amount of unlabelled data to train systems. The labelled data is used to partially train a machine-learning model, and then that partially trained model is used to label the unlabelled data, a process called pseudo-labelling. The model is then trained on the resulting mix of the labelled and pseudo-labelled data.
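The sketch below walks through that pseudo-labelling loop in miniature, pretending that only a small slice of scikit-learn's digits dataset is labelled; the dataset, model and split are assumptions made purely for illustration.

```python
# A minimal sketch of pseudo-labelling: train on the small labelled set, label
# the unlabelled pool with the partial model, then retrain on the combined data.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

X, y = load_digits(return_X_y=True)
labelled = np.arange(len(y)) < 200           # pretend only 200 examples are labelled
X_lab, y_lab = X[labelled], y[labelled]
X_unlab = X[~labelled]

partial = LogisticRegression(max_iter=2000).fit(X_lab, y_lab)
pseudo_labels = partial.predict(X_unlab)     # the "pseudo-labelling" step

X_mix = np.vstack([X_lab, X_unlab])
y_mix = np.concatenate([y_lab, pseudo_labels])
final = LogisticRegression(max_iter=2000).fit(X_mix, y_mix)
```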

The viability of semi-supervised learning has been boosted recently by Generative Adversarial Networks (GANs), machine-learning systems that can use labelled data to generate completely new data, for example creating new images of Pokemon from existing images, which in turn can be used to help train a machine-learning model.

Were semi-supervised learning to become as effective as supervised learning, then access to huge amounts of computing power may end up being more important for successfully training machine-learning systems than access to large, labelled datasets.

A way to understand reinforcement learning is to think about how someone might learn to play an old-school computer game for the first time, when they aren't familiar with the rules or how to control the game. While they may be a complete novice, eventually, by looking at the relationship between the buttons they press, what happens on screen and their in-game score, their performance will get better and better.

An example of reinforcement learning is Google DeepMind's Deep Q-network, which has beaten humans in a wide range of vintage video games. The system is fed pixels from each game and determines various information about the state of the game, such as the distance between objects on screen. It then considers how the state of the game and the actions it performs in game relate to the score it achieves.

Over the process of many cycles of playing the game, eventually the system builds a model of which actions will maximize the score in which circumstance, for instance, in the case of the video game Breakout, where the paddle should be moved to in order to intercept the ball.
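The toy sketch below shows the same trial-and-error principle in its simplest tabular form: an agent learning, from rewards alone, to walk right along a short corridor. It is emphatically not DeepMind's Deep Q-network; the environment, reward and parameter values are invented solely to illustrate reinforcement learning.

```python
# A toy tabular Q-learning agent on a 5-cell corridor, where moving right
# eventually reaches a reward.
import random

n_states, actions = 5, [0, 1]          # actions: 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(300):
    state = 0
    while state != n_states - 1:              # episode ends at the rightmost cell
        if random.random() < epsilon:          # explore occasionally
            action = random.choice(actions)
        else:                                   # act greedily, breaking ties at random
            best = max(Q[state])
            action = random.choice([a for a in actions if Q[state][a] == best])
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Move the value estimate toward reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print(Q)  # "move right" ends up valued more highly the closer a cell is to the reward
```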

Everything begins with training a machine-learning model, a mathematical function capable of repeatedly modifying how it operates until it can make accurate predictions when given fresh data.

Before training begins, you first have to choose which data to gather and decide which features of the data are important.

A hugely simplified example of what data features are is given in this explainer by Google, where a machine learning model is trained to recognize the difference between beer and wine, based on two features: the drinks' color and their alcohol by volume (ABV).

Each drink is labelled as a beer or a wine, and then the relevant data is collected, using a spectrometer to measure their color and a hydrometer to measure their alcohol content.

An important point to note is that the data has to be balanced, in this instance to have a roughly equal number of examples of beer and wine.

The gathered data is then split, into a larger proportion for training, say about 70 percent, and a smaller proportion for evaluation, say the remaining 30 percent. This evaluation data allows the trained model to be tested to see how well it is likely to perform on real-world data.

Before training gets underway there will generally also be a data-preparation step, during which processes such as deduplication, normalization and error correction will be carried out.

The next step will be choosing an appropriate machine-learning model from the wide variety available. Each has strengths and weaknesses depending on the type of data, for example some are suited to handling images, some to text, and some to purely numerical data.

Basically, the training process involves the machine-learning model automatically tweaking how it functions until it can make accurate predictions from data, in the Google example, correctly labeling a drink as beer or wine when the model is given a drink's color and ABV.

A good way to explain the training process is to consider an example using a simple machine-learning model, known as linear regression with gradient descent. In the following example, the model is used to estimate how many ice creams will be sold based on the outside temperature.

Imagine taking past data showing ice cream sales and outside temperature, and plotting that data against each other on a scatter graph -- basically creating a scattering of discrete points.

To predict how many ice creams will be sold in future based on the outdoor temperature, you can draw a line that passes through the middle of all these points, similar to the illustration below.

Once this is done, ice cream sales can be predicted at any temperature by finding the point at which the line passes through a particular temperature and reading off the corresponding sales at that point.

Bringing it back to training a machine-learning model, in this instance training a linear regression model would involve adjusting the vertical position and slope of the line until it lies in the middle of all of the points on the scatter graph.

At each step of the training process, the vertical distance of each of these points from the line is measured. If a change in slope or position of the line results in the distance to these points increasing, then the slope or position of the line is changed in the opposite direction, and a new measurement is taken.

In this way, via many tiny adjustments to the slope and the position of the line, the line will keep moving until it eventually settles in a position which is a good fit for the distribution of all these points, as seen in the video below. Once this training process is complete, the line can be used to make accurate predictions for how temperature will affect ice cream sales, and the machine-learning model can be said to have been trained.
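For readers who want to see that loop written down, here is a minimal sketch of gradient descent fitting a line to made-up temperature and ice-cream-sales figures; the numbers, learning rate and step count are arbitrary choices for illustration only.

```python
# A minimal sketch of training by gradient descent: fit
# sales ≈ slope * temperature + intercept to a handful of invented data points.
import numpy as np

temps = np.array([15.0, 18.0, 21.0, 24.0, 27.0, 30.0, 33.0])
sales = np.array([120., 150., 210., 260., 310., 380., 430.])   # ice creams sold

slope, intercept, lr = 0.0, 0.0, 0.001   # start with a flat line; small step size

for step in range(100_000):              # many tiny adjustments, as described above
    predictions = slope * temps + intercept
    errors = predictions - sales
    # Gradients of the mean squared error with respect to slope and intercept.
    grad_slope = 2 * np.mean(errors * temps)
    grad_intercept = 2 * np.mean(errors)
    # Nudge the line in the direction that reduces the error.
    slope -= lr * grad_slope
    intercept -= lr * grad_intercept

print("learned slope:", round(slope, 1), "intercept:", round(intercept, 1))
print("predicted sales at 25 degrees:", round(slope * 25 + intercept))
```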

While training for more complex machine-learning models such as neural networks differs in several respects, it is similar in that it also uses a "gradient descent" approach, where the value of "weights" that modify input data are repeatedly tweaked until the output values produced by the model are as close as possible to what is desired.

Once training of the model is complete, the model is evaluated using the remaining data that wasn't used during training, helping to gauge its real-world performance.

To further improve performance, training parameters can be tuned. An example might be altering the extent to which the "weights" are altered at each step in the training process.

A very important group of algorithms for both supervised and unsupervised machine learning are neural networks. These underlie much of machine learning, and while simple models like linear regression can be used to make predictions based on a small number of data features, as in the Google example with beer and wine, neural networks are useful when dealing with large sets of data with many features.

Neural networks, whose structure is loosely inspired by that of the brain, are interconnected layers of algorithms, called neurons, which feed data into each other, with the output of the preceding layer being the input of the subsequent layer.

Each layer can be thought of as recognizing different features of the overall data. For instance, consider the example of using machine learning to recognize handwritten numbers between 0 and 9. The first layer in the neural network might measure the color of the individual pixels in the image, the second layer could spot shapes, such as lines and curves, the next layer might look for larger components of the written number -- for example, the rounded loop at the base of the number 6. This carries on all the way through to the final layer, which will output the probability that a given handwritten figure is a number between 0 and 9.


The network learns how to recognize each component of the numbers during the training process, by gradually tweaking the importance of data as it flows between the layers of the network. This is possible due to each link between layers having an attached weight, whose value can be increased or decreased to alter that link's significance. At the end of each training cycle the system will examine whether the neural network's final output is getting closer or further away from what is desired -- for instance, is the network getting better or worse at identifying a handwritten number 6. To close the gap between the actual output and desired output, the system will then work backwards through the neural network, altering the weights attached to all of these links between layers, as well as an associated value called bias. This process is called back-propagation.

Eventually this process will settle on values for these weights and biases that will allow the network to reliably perform a given task, such as recognizing handwritten numbers, and the network can be said to have "learned" how to carry out a specific task.
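A bare-bones version of that forward-and-backward process can be written in a few lines of NumPy. The sketch below trains a tiny two-layer network on the XOR problem rather than on handwritten digits, purely to keep the example self-contained; the layer size, learning rate and iteration count are arbitrary.

```python
# A tiny two-layer neural network trained with back-propagation on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)       # XOR targets

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)          # weights and biases, layer 1
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)          # weights and biases, layer 2
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20_000):
    # Forward pass: each layer feeds its output to the next.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass (back-propagation): work out how each weight and bias
    # contributed to the error, then nudge them to shrink it.
    d_out = out - y                          # gradient of cross-entropy loss at the output
    d_h = (d_out @ W2.T) * h * (1 - h)       # error pushed back through the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))   # close to [0, 1, 1, 0] once trained
```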

An illustration of the structure of a neural network and how training works.

A subset of machine learning is deep learning, where neural networks are expanded into sprawling networks with a huge number of layers that are trained using massive amounts of data. It is these deep neural networks that have fueled the current leap forward in the ability of computers to carry out tasks like speech recognition and computer vision.

There are various types of neural networks, with different strengths and weaknesses. Recurrent neural networks are a type of neural net particularly well suited to language processing and speech recognition, while convolutional neural networks are more commonly used in image recognition. The design of neural networks is also evolving, with researchers recently devising a more efficient design for an effective type of deep neural network called long short-term memory or LSTM, allowing it to operate fast enough to be used in on-demand systems like Google Translate.

The AI technique of evolutionary algorithms is even being used to optimize neural networks, thanks to a process called neuroevolution. The approach was recently showcased by Uber AI Labs, which released papers on using genetic algorithms to train deep neural networks for reinforcement learning problems.

While machine learning is not a new technique, interest in the field has exploded in recent years.

This resurgence comes on the back of a series of breakthroughs, with deep learning setting new records for accuracy in areas such as speech and language recognition, and computer vision.

What's made these successes possible are primarily two factors, one being the vast quantities of images, speech, video and text that are accessible to researchers looking to train machine-learning systems.

But even more important is the availability of vast amounts of parallel-processing power, courtesy of modern graphics processing units (GPUs), which can be linked together into clusters to form machine-learning powerhouses.

Today anyone with an internet connection can use these clusters to train machine-learning models, via cloud services provided by firms like Amazon, Google and Microsoft.

As the use of machine learning has taken off, companies are now creating specialized hardware tailored to running and training machine-learning models. An example of one of these custom chips is Google's Tensor Processing Unit (TPU), the latest version of which accelerates the rate at which machine-learning models built using Google's TensorFlow software library can infer information from data, as well as the rate at which they can be trained.

These chips are not just used to train models for Google DeepMind and Google Brain, but also the models that underpin Google Translate and the image recognition in Google Photos, as well as services that allow the public to build machine learning models using Google's TensorFlow Research Cloud. The second generation of these chips was unveiled at Google's I/O conference in May last year, with an array of these new TPUs able to train a Google machine-learning model used for translation in half the time it would take an array of the top-end GPUs, and the recently announced third-generation TPUs able to accelerate training and inference even further.

As hardware becomes increasingly specialized and machine-learning software frameworks are refined, it's becoming increasingly common for ML tasks to be carried out on consumer-grade phones and computers, rather than in cloud datacenters. In the summer of 2018, Google took a step towards offering the same quality of automated translation on phones that are offline as is available online, by rolling out local neural machine translation for 59 languages to the Google Translate app for iOS and Android.

Perhaps the most famous demonstration of the efficacy of machine-learning systems was the 2016 triumph of the Google DeepMind AlphaGo AI over a human grandmaster in Go, a feat that wasn't expected until 2026. Go is an ancient Chinese game whose complexity bamboozled computers for decades. Go has about 200 moves per turn, compared to about 20 in Chess. Over the course of a game of Go, there are so many possible moves that searching through each of them in advance to identify the best play is too costly from a computational standpoint. Instead, AlphaGo was trained how to play the game by taking moves played by human experts in 30 million Go games and feeding them into deep-learning neural networks.

Training the deep-learning networks needed can take a very long time, requiring vast amounts of data to be ingested and iterated over as the system gradually refines its model in order to achieve the best outcome.

However, more recently Google refined the training process with AlphaGo Zero, a system that played "completely random" games against itself, and then learnt from the results. At last year's prestigious Neural Information Processing Systems (NIPS) conference, Google DeepMind CEO Demis Hassabis revealed AlphaGo had also mastered the games of chess and shogi.

DeepMind continues to break new ground in the field of machine learning. In July 2018, DeepMind reported that its AI agents had taught themselves how to play the 1999 multiplayer 3D first-person shooter Quake III Arena, well enough to beat teams of human players. These agents learned how to play the game using no more information than the human players, with their only input being the pixels on the screen as they tried out random actions in game, and feedback on their performance during each game.

More recently DeepMind demonstrated an AI agent capable of superhuman performance across multiple classic Atari games, an improvement over earlier approaches where each AI agent could only perform well at a single game. DeepMind researchers say these general capabilities will be important if AI research is to tackle more complex real-world domains.

Machine learning systems are used all around us, and are a cornerstone of the modern internet.

Machine-learning systems are used to recommend which product you might want to buy next on Amazon or which video you may want to watch on Netflix.

Every Google search uses multiple machine-learning systems, from understanding the language in your query through to personalizing your results, so fishing enthusiasts searching for "bass" aren't inundated with results about guitars. Similarly, Gmail's spam and phishing-recognition systems use models trained with machine learning to keep your inbox clear of rogue messages.

One of the most obvious demonstrations of the power of machine learning is virtual assistants, such as Apple's Siri, Amazon's Alexa, the Google Assistant, and Microsoft's Cortana.

Each relies heavily on machine learning to support their voice recognition and ability to understand natural language, as well as needing an immense corpus to draw upon to answer queries.

But beyond these very visible manifestations of machine learning, systems are starting to find a use in just about every industry. These applications include: computer vision for driverless cars, drones and delivery robots; speech and language recognition and synthesis for chatbots and service robots; facial recognition for surveillance in countries like China; helping radiologists to pick out tumors in X-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs in healthcare; allowing for predictive maintenance on infrastructure by analyzing IoT sensor data; underpinning the computer vision that makes the cashierless Amazon Go supermarket possible; and offering reasonably accurate transcription and translation of speech for business meetings -- the list goes on and on.

Deep learning could eventually pave the way for robots that can learn directly from humans, with researchers from Nvidia recently creating a deep-learning system designed to teach a robot how to carry out a task, simply by observing that job being performed by a human.

As you'd expect, the choice and breadth of data used to train systems will influence the tasks they are suited to.

For example, in 2016 Rachael Tatman, a National Science Foundation Graduate Research Fellow in the Linguistics Department at the University of Washington, found that Google's speech-recognition system performed better for male voices than female ones when auto-captioning a sample of YouTube videos, a result she ascribed to 'unbalanced training sets' with a preponderance of male speakers.

As machine-learning systems move into new areas, such as aiding medical diagnosis, the possibility of systems being skewed towards offering a better service or fairer treatment to particular groups of people will likely become more of a concern.

A heavily recommended course for beginners to teach themselves the fundamentals of machine learning is this free Stanford University and Coursera lecture series by AI expert and Google Brain founder Andrew Ng.

Another highly-rated free online course, praised for both the breadth of its coverage and the quality of its teaching, is this EdX and Columbia University introduction to machine learning, although students do mention it requires a solid knowledge of math up to university level.

Technologies designed to allow developers to teach themselves about machine learning are increasingly common, from AWS' deep-learning enabled camera DeepLens to Google's Raspberry Pi-powered AIY kits.

All of the major cloud platforms -- Amazon Web Services, Microsoft Azure and Google Cloud Platform -- provide access to the hardware needed to train and run machine-learning models, with Google letting Cloud Platform users test out its Tensor Processing Units -- custom chips whose design is optimized for training and running machine-learning models.

This cloud-based infrastructure includes the data stores needed to hold the vast amounts of training data, services to prepare that data for analysis, and visualization tools to display the results clearly.

Newer services even streamline the creation of custom machine-learning models, with Google recently revealing a service that automates the creation of AI models, called Cloud AutoML. This drag-and-drop service builds custom image-recognition models and requires the user to have no machine-learning expertise, similar to Microsoft's Azure Machine Learning Studio. In a similar vein, Amazon recently unveiled new AWS offerings designed to accelerate the process of training up machine-learning models.

For data scientists, Google's Cloud ML Engine is a managed machine-learning service that allows users to train, deploy and export custom machine-learning models based either on Google's open-sourced TensorFlow ML framework or the open neural network framework Keras, and which now can be used with the Python library scikit-learn and XGBoost.

Database admins without a background in data science can use Google's BigQueryML, a beta service that allows admins to call trained machine-learning models using SQL commands, allowing predictions to be made in-database, which is simpler than exporting data to a separate machine learning and analytics environment.

For firms that don't want to build their own machine-learning models, the cloud platforms also offer AI-powered, on-demand services -- such as voice, vision, and language recognition. Microsoft Azure stands out for the breadth of on-demand services on offer, closely followed by Google Cloud Platform and then AWS.

Meanwhile IBM, alongside its more general on-demand offerings, is also attempting to sell sector-specific AI services aimed at everything from healthcare to retail, grouping these offerings together under its IBM Watson umbrella.

Early in 2018, Google expanded its machine-learning driven services to the world of advertising, releasing a suite of tools for making more effective ads, both digital and physical.

While Apple doesn't enjoy the same reputation for cutting-edge speech recognition, natural language processing and computer vision as Google and Amazon, it is investing in improving its AI services, recently putting Google's former AI chief in charge of machine learning and AI strategy across the company, including the development of its assistant Siri and its Core ML machine-learning framework.

In September 2018, NVIDIA launched a combined hardware and software platform designed to be installed in datacenters that can accelerate the rate at which trained machine-learning models can carry out voice, video and image recognition, as well as other ML-related services.

The NVIDIA TensorRT Hyperscale Inference Platform uses NVIDIA Tesla T4 GPUs, which deliver up to 40x the performance of CPUs when using machine-learning models to make inferences from data, and the TensorRT software platform, which is designed to optimize the performance of trained neural networks.

There are a wide variety of software frameworks for getting started with training and running machine-learning models, typically for the programming languages Python, R, C++, Java and MATLAB.

Famous examples include Google's TensorFlow, the open-source library Keras, the Python library Scikit-learn, the deep-learning framework CAFFE and the machine-learning library Torch.
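As an indication of what getting started with one of these frameworks looks like, the sketch below uses TensorFlow's Keras API to train a small classifier on the MNIST handwritten-digit images; the layer sizes and epoch count are arbitrary illustrative choices, not a recommended architecture.

```python
# A minimal "hello world" of a deep-learning framework: a small Keras
# (TensorFlow) model trained on the MNIST handwritten digits.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to 0-1

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)
print(model.evaluate(x_test, y_test))                 # [loss, accuracy] on held-out data
```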

Read the original:

What is machine learning? Everything you need to know | ZDNet

Written by admin |

February 22nd, 2020 at 8:45 pm

Posted in Machine Learning

Why 2020 will be the Year of Automated Machine Learning – Gigabit Magazine – Technology News, Magazine and Website

Posted: at 8:45 pm


As the fuel that powers their ongoing digital transformation efforts, businesses everywhere are looking for ways to derive as much insight as possible from their data. The accompanying increased demand for advanced predictive and prescriptive analytics has, in turn, led to a call for more data scientists proficient with the latest artificial intelligence (AI) and machine learning (ML) tools.

But such highly-skilled data scientists are expensive and in short supply. In fact, they're such a precious resource that the phenomenon of the citizen data scientist has recently arisen to help close the skills gap. A complementary role, rather than a direct replacement, citizen data scientists lack specific advanced data science expertise. However, they are capable of generating models using state-of-the-art diagnostic and predictive analytics. And this capability is partly due to the advent of accessible new technologies such as automated machine learning (AutoML) that now automate many of the tasks once performed by data scientists.

Algorithms and automation

According to a recent Harvard Business Review article, "Organisations have shifted towards amplifying predictive power by coupling big data with complex automated machine learning. AutoML, which uses machine learning to generate better machine learning, is advertised as affording opportunities to democratise machine learning by allowing firms with limited data science expertise to develop analytical pipelines capable of solving sophisticated business problems."

Comprising a set of algorithms that automate the writing of other ML algorithms, AutoML automates the end-to-end process of applying ML to real-world problems. By way of illustration, a standard ML pipeline is made up of the following: data pre-processing, feature extraction, feature selection, feature engineering, algorithm selection, and hyper-parameter tuning. But the considerable expertise and time it takes to implement these steps means there's a high barrier to entry.
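For a sense of what is being automated, the sketch below hand-builds a cut-down version of such a pipeline with scikit-learn, wiring pre-processing, feature selection, algorithm choice and hyper-parameter tuning into a single grid search. The dataset and parameter grid are illustrative assumptions; AutoML tools perform far broader searches than this.

```python
# A hand-built stand-in for the pipeline steps an AutoML system automates.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),        # data pre-processing
    ("select", SelectKBest()),          # feature selection
    ("model", SVC()),                   # algorithm selection
])
param_grid = {                          # hyper-parameter tuning
    "select__k": [10, 20, 30],
    "model__C": [0.1, 1.0, 10.0],
}
search = GridSearchCV(pipeline, param_grid, cv=5).fit(X_train, y_train)
print(search.best_params_, search.score(X_test, y_test))
```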

AutoML removes some of these constraints. Not only does it significantly reduce the time it would typically take to implement an ML process under human supervision, it can also often improve the accuracy of the model in comparison to hand-crafted models, trained and deployed by humans. In doing so, it offers organisations a gateway into ML, as well as freeing up the time of ML engineers and data practitioners, allowing them to focus on higher-order challenges.


Overcoming scalability problems

The trend for combining ML with Big Data for advanced data analytics began back in 2012, when deep learning became the dominant approach to solving ML problems. This approach heralded the generation of a wealth of new software, tooling, and techniques that altered both the workload and the workflow associated with ML on a large scale. Entirely new ML toolsets, such as TensorFlow and PyTorch, were created, and people increasingly began to engage more with graphics processing units (GPUs) to accelerate their work.

Until this point, companies' efforts had been hindered by the scalability problems associated with running ML algorithms on huge datasets. Now, though, they were able to overcome these issues. By quickly developing sophisticated internal tooling capable of building world-class AI applications, the BigTech powerhouses soon overtook their Fortune 500 peers when it came to realising the benefits of smarter data-driven decision-making and applications.

Insight, innovation and data-driven decisions

AutoML represents the next stage in ML's evolution, promising to help non-tech companies access the capabilities they need to quickly and cheaply build ML applications.

In 2018, for example, Google launched its Cloud AutoML. Based on Neural Architecture Search (NAS) and transfer learning, it was described by Google executives as having the potential to make AI experts even more productive, advance new fields in AI, and help less-skilled engineers build powerful AI systems they previously only dreamed of.

The one downside to Google's AutoML is that it's a proprietary algorithm. There are, however, a number of alternative open-source AutoML libraries, such as AutoKeras, developed by researchers at Texas A&M University and built around a NAS algorithm.

Technological breakthroughs such as these have given companies the capability to easily build production-ready models without the need for expensive human resources. By leveraging AI, ML, and deep learning capabilities, AutoML gives businesses across all industries the opportunity to benefit from data-driven applications powered by statistical models - even when advanced data science expertise is scarce.

With organisations increasingly reliant on citizen data scientists, 2020 is likely to be the year that enterprise adoption of AutoML will start to become mainstream. Its ease of access will compel business leaders to finally open the black box of ML, thereby elevating their knowledge of its processes and capabilities. AI and ML tools and practices will become ever more ingrained in businesses' everyday thinking and operations as they become more empowered to identify those projects whose invaluable insight will drive better decision-making and innovation.

By Senthil Ravindran, EVP and global head of cloud transformation and digital innovation, Virtusa

Read the original post:

Why 2020 will be the Year of Automated Machine Learning - Gigabit Magazine - Technology News, Magazine and Website

Written by admin |

February 22nd, 2020 at 8:45 pm

Posted in Machine Learning

Machine Learning: Real-life applications and it’s significance in Data Science – Techstory

Posted: at 8:44 pm


Do you know how Google Maps predicts traffic? Are you amazed by how Amazon Prime or Netflix recommends just the movie you would want to watch? We all know it must be some approach of Artificial Intelligence. Machine Learning involves algorithms and statistical models to perform tasks. This same approach is used to find faces on Facebook and to detect cancer, too. A Machine Learning course can educate in the development and application of such models.

Artificial Intelligence mimics human intelligence. Machine Learning is one of the significant branches of it. There is an ongoing and increasing need for its development.

Tasks as simple as spam detection in Gmail illustrate its significance in our day-to-day lives. That is why data scientists are in such demand at present. An aspiring data scientist can learn to develop and apply such algorithms by pursuing a Machine Learning certification.

Machine learning, as a subset of Artificial Intelligence, is applied for varied purposes. There is a misconception that applying Machine Learning algorithms requires prior mathematical knowledge, but a Machine Learning online course would suggest otherwise: contrary to the usual bottom-up approach to studying, a top-down approach is involved. An aspiring data scientist, a business person or anyone else can learn how to apply statistical models for various purposes. Here is a list of some well-known applications of Machine Learning.

Microsoft's research lab uses Machine Learning to study cancer. This helps in individualized oncological treatment and the generation of detailed progress reports. The data engineers apply pattern recognition, Natural Language Processing and computer vision algorithms to work through large volumes of data. This helps oncologists conduct precise and breakthrough tests.

Likewise, machine learning is applied in biomedical engineering. This has led to automation of diagnostic tools. Such tools are used in detecting neurological and psychiatric disorders of many sorts.

We all have had a conversation with Siri or Alexa. They use speech recognition to input our requests. Machine Learning is applied here to auto-generate responses based on previous data. Hello Barbie is a Siri-like version for kids to play with. It uses advanced analytics, machine learning and Natural Language Processing to respond. It is the first AI-enabled toy, and it could lead to more such inventions.

Google uses Machine Learning statistical models to acquire inputs. The statistical models collect details such as the distance from the start point to the endpoint, duration and bus schedules. Such historical data is stored and reused. Machine Learning algorithms are developed with the objective of data prediction. They recognise patterns among such inputs and predict approximate time delays.

Another well-known Google application, Google Translate, involves Machine Learning. Deep learning aids in learning language rules from recorded conversations. Neural networks such as long short-term memory (LSTM) networks aid in retaining and updating information over long sequences. Recurrent neural networks identify the sequential structure of language. Even bilingual processing is feasible nowadays.

Facebook uses image recognition and computer vision to detect images. Such images are fed as inputs. The statistical models developed using Machine Learning map any information associated with these images. Facebook generates automated captions for images. These captions are meant to provide directions for visually impaired people. This innovation of Facebook has nudged data engineers to come up with other such valuable real-time applications.

For a streaming service's recommendations, the aim is to increase the likelihood of the customer watching a recommended movie. This is achieved by studying previous thumbnails. An algorithm is developed to study these thumbnails and derive recommendation results. Every available movie has its own set of thumbnails. The thumbnails are assigned individual numerical values, and a recommendation is generated by pattern recognition among that numerical data.

Tesla uses computer vision, data prediction, and path planning for its self-driving features. The machine learning practices applied make the innovation stand out. Deep neural networks work with training data and generate driving instructions. Many capabilities, such as changing lanes, are learned through imitation learning.

Gmail, Yahoo Mail and Outlook employ machine learning techniques such as neural networks. These networks detect patterns in historical data, training on received reports of spam and phishing messages. These spam filters are reported to provide 99.9 percent accuracy.

As people grow more health-conscious, the development of fitness-monitoring applications is on the rise. At the top of the market, Fitbit ensures its effectiveness through the use of machine learning methods. Trained machine learning models predict user activities. This is achieved through data pre-processing, data processing and data partitioning. There is still room to extend the application to additional purposes.

The above-mentioned applications are just the tip of the iceberg. Machine learning, as a subset of Artificial Intelligence, finds uses in many other areas of daily life.


Read more here:

Machine Learning: Real-life applications and it's significance in Data Science - Techstory

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Grok combines Machine Learning and the Human Brain to build smarter AIOps – Diginomica

Posted: at 8:44 pm


A few weeks ago I wrote a piece here about Moogsoft, which has been making waves in the service assurance space by applying artificial intelligence and machine learning to the arcane task of keeping critical IT up and running and lessening the business impact of service interruptions. It's a hot area for startups, and I've since gotten article pitches from several other AIOps firms at varying levels of development.

The most intriguing of these is a company called Grok, which was formed by a partnership between Avik Partners and Numenta, a pioneering AI research firm co-founded by Jeff Hawkins and Donna Dubinsky, who are famous for having started two classic mobile computing companies, Palm and Handspring. Avik is a company formed by brothers Casey and Josh Kindiger, two veteran entrepreneurs who have successfully started and grown multiple technology companies in service assurance and automation over the past two decades, most recently Resolve Systems.

Josh Kindiger told me in a telephone interview how the partnership came about:

Numenta is primarily a research entity started by Jeff and Donna about 15 years ago to support Jeff's ideas about the intersection of neuroscience and data science. About five years ago, they developed an algorithm called HTM and a product called Grok for AWS, which monitors servers on a network for anomalies. They weren't interested in developing a company around it, but we came along and saw a way to link our deep domain experience in the service management and automation areas with their technology. So, we licensed the name and the technology and built part of our Grok AIOps platform around it.

Jeff Hawkins has spent most of his post-Palm and Handspring years trying to figure out how the human brain works and then reverse engineering that knowledge into structures that machines can replicate. His model or theory, called hierarchical temporal memory (HTM), was originally described in his 2004 book On Intelligence written with Sandra Blakeslee. HTM is based on neuroscience and the physiology and interaction of pyramidal neurons in the neocortex of the mammalian (in particular, human) brain. For a little light reading, I recommend a peer-reviewed paper called A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.

Grok AIOps also uses traditional machine learning, alongside HTM. Said Kindiger:

When I came in, the focus was purely on anomaly detection and I immediately engaged with a lot of my old customers -- large Fortune 500 companies and very large service providers -- and quickly found out that while anomaly detection was extremely important, that first signal wasn't going to be enough. So, we transformed Grok into a platform. And essentially what we do is we apply the correct algorithm, whether it's HTM or something else, to the proper stream -- events, logs and performance metrics. Grok can enable predictive, self-healing operations within minutes.

The Grok AIOps platform uses multiple layers of intelligence to identify issues and support their resolution:

Anomaly detection

The HTM algorithm has proven exceptionally good at detecting and predicting anomalies and reducing noise, often up to 90%, by providing the critical context needed to identify incidents before they happen. It can detect anomalies in signals beyond low and high thresholds, such as signal frequency changes that reflect changes in the behavior of the underlying systems. Said Kindiger:

We believe HTM is the leading anomaly detection engine in the market. In fact, it has consistently been the best performing anomaly detection algorithm in the industry resulting in less noise, less false positives and more accurate detection. It is not only best at detecting an anomaly with the smallest amount of noise but it also scales, which is the biggest challenge.
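Numenta's HTM algorithm itself is far too involved for a short excerpt, but the simplest version of the underlying idea, flagging readings that deviate sharply from a metric's recent behaviour, can be sketched as below. This is a plain rolling z-score heuristic written purely for illustration; it is not Grok's or Numenta's actual code, and real anomaly detection handles far subtler signal changes.

```python
# A deliberately simple stand-in for anomaly detection on a metric stream:
# flag readings far outside their recent rolling behaviour.
from collections import deque
from statistics import mean, stdev

def detect_anomalies(stream, window=30, threshold=4.0):
    """Yield (index, value) for readings more than `threshold` standard
    deviations away from the mean of the previous `window` readings."""
    recent = deque(maxlen=window)
    for i, value in enumerate(stream):
        if len(recent) == window:
            mu, sigma = mean(recent), stdev(recent)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield i, value
        recent.append(value)

# Example: a steady CPU metric with one sudden spike.
readings = [50 + (i % 3) for i in range(100)]
readings[70] = 95
print(list(detect_anomalies(readings)))   # [(70, 95)]
```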

Anomaly clustering

To help reduce noise, Grok clusters anomalies that belong together through the same event or cause.

Event and log clustering

Grok ingests all the events and logs from the integrated monitors and then applies event and log clustering algorithms to them, including pattern recognition and dynamic time warping, which also reduce noise.

IT operations have become almost impossible for humans alone to manage. Many companies struggle to meet the high demand due to increased cloud complexity. Distributed apps make it difficult to track where problems occur during an IT incident. Every minute of downtime directly impacts the bottom line.

In this environment, the relatively new solution to reduce this burden of IT management, dubbed AIOps, looks like a much needed lifeline to stay afloat. AIOps translates to "Algorithmic IT Operations" and its premise is that algorithms, not humans or traditional statistics, will help to make smarter IT decisions and help ensure application efficiency. AIOps platforms reduce the need for human intervention by using ML to set alerts and automation to resolve issues. Over time, AIOps platforms can learn patterns of behavior within distributed cloud systems and predict disasters before they happen.

Grok detects latent issues with cloud apps and services and triggers automations to troubleshoot these problems before requiring further human intervention. Its technology is solid, its owners have lots of experience in the service assurance and automation spaces, and who can resist the story of the first commercial use of an algorithm modeled on the human brain.

Go here to see the original:

Grok combines Machine Learning and the Human Brain to build smarter AIOps - Diginomica

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages – MarTech Series

Posted: at 8:44 pm


The First End-to-End Messaging Visibility Platform Allows Mobile Operators and Internet Service Providers to Identify, Analyze and Prioritize Messages

Syniverse and RealNetworks have announced they have incorporated sophisticated machine learning (ML) features into their integrated offering that gives carriers visibility and control over mobile messaging traffic. By integrating RealNetworks' Kontxt application-to-person (A2P) message categorization capabilities into Syniverse Messaging Clarity, mobile network operators (MNOs), internet service providers (ISPs), and messaging aggregators can identify and block spam, phishing, and malicious messages while prioritizing legitimate A2P traffic, better monetizing their service.

At the time of this announcement, Bill Corbin, Senior Vice President of Indirect Markets & Strategic Partnerships at Syniverse, said:

"Syniverse offers companies the capability to use machine learning technologies to gain insight into what traffic is flowing through their networks, while simultaneously ensuring consumer privacy and keeping the actual contents of the messages hidden. The Syniverse Messaging Clarity solution can generate statistics examining the type of traffic sent and whether it deviates from the sender's traffic pattern. From there, the technology analyzes if the message is a valid one or spam and blocks the spam."


Currently, Syniverse helps mobile operators and businesses manage and secure their mobile and network communications, driving better engagements and business outcomes.

Surash Patel, General Manager of Kontxt at RealNetworks, added, "The self-learning Kontxt algorithms within the Syniverse Messaging Clarity solution allow its threat-assessment techniques to evolve with changes in message traffic. Our analytics also verify that sent messages conform to network standards pertaining to spam and fraud. By deploying Messaging Clarity, MNOs and ISPs can help ensure their compliance with local regulations across the world, including the U.S. Telephone Consumer Protection Act, while also avoiding potential costs associated with violations. And, ultimately, the consumer who is the recipient of more appropriate text messages and less spam wins as well, as our Kontxt technology within the Messaging Clarity solution works to enhance customer trust and improve the overall customer experience."

Syniverse Messaging Clarity, the first end-to-end messaging visibility solution, utilizes the best-in-class grey route firewall, and clearing and settlement tools to maximize messaging revenue streams, better control spam traffic, and closely partner with enterprises. The solution analyzes the delivery of messages before categorizing them into specific groupings, including messages being sent from one person to another person (P2P), A2P messages, or outright spam. Through its existing clearing and settlement capabilities, Messaging Clarity can transform upcoming technologies like Rich Communication Services (RCS) and chatbots into revenue-generating products and services without the clutter and cost of spam or fraud.

The foundational Kontxt technology adds natural language processing and deep learning techniques to Messaging Clarity to continually update and improve its understanding of messages and classification. This new feature adds to Messaging Clarity's ability to identify, categorize, and ascribe a monetary value to the immense volume and complexity of messages that are delivered through text messaging, chatbots, and other channels.

The Syniverse and RealNetworks Kontxt message classification provides companies with the ability to ensure that urgent messages, like one-time passwords, are sent at a premium rate compared with lower-priority notifications, such as promotional offers. The Syniverse Messaging Clarity solution also helps eliminate instances of SMS phishing (smishing). This type of attack recently occurred with a global shipping company, when spam texts were sent to consumers asking them to click a link to receive an update on package delivery for a phantom order.
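
Neither company spells out how categories map to rates, so the following is only a toy sketch of the idea: each classified message is billed at a rate tied to its category, and suspected smishing is blocked rather than billed and delivered. The category names and prices are assumptions, not published pricing.

# Hypothetical per-message rates by category (in USD); real pricing would be
# negotiated between aggregators, operators, and enterprises.
RATE_BY_CATEGORY = {"one_time_password": 0.030, "delivery_notification": 0.015, "promotion": 0.008}

def bill_and_filter(messages):
    """Return (deliverable messages, total billed), dropping suspected smishing."""
    deliverable, total = [], 0.0
    for msg in messages:
        if msg.get("suspected_smishing"):
            continue  # blocked: never delivered, never billed
        total += RATE_BY_CATEGORY.get(msg["category"], 0.008)
        deliverable.append(msg)
    return deliverable, total

inbox = [
    {"category": "promotion", "body": "Sale ends tonight"},
    {"category": "one_time_password", "body": "Your code is 914267"},
    {"category": "delivery_notification", "body": "Click the link", "suspected_smishing": True},
]
msgs, billed = bill_and_filter(inbox)
print(len(msgs), round(billed, 3))  # 2 messages, billed 0.038 in total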

Building on a legacy of digital media expertise and innovation, RealNetworks has created a new generation of products that employ best-in-class artificial intelligence and machine learning to enhance and secure online communication channels.

Sudipto is a technology research specialist who brings 11 years of professional blogging and technical writing experience. He has developed cutting-edge content for over 100 websites and mobile applications. Our wordsmith is an engaging conversationalist and has done more than 200 interviews with some of the leading names in the automobile, digital advertising, IT/ITES, medical technology, real estate, gemstone certification, HVAC, tourism and food processing industries. Apart from technical writing, he loves to blow off steam by chronicling stories about top medical professionals, innovators, spiritual 'gurus', gym trainers, nutritionists, wedding planners, chefs and off-beat hobbyists. The best place to find him beyond work hours is the shadiest underground gym in the city. He is an ardent sports buff and contributes witty live commentary too.

See original here:

Syniverse and RealNetworks Collaboration Brings Kontxt-Based Machine Learning Analytics to Block Spam and Phishing Text Messages - MarTech Series

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

How to Pick a Winning March Madness Bracket – Machine Learning Times – machine learning & data science news – The Predictive Analytics Times

Posted: at 8:44 pm


Introduction

In 2019, over 40 million Americans wagered money on March Madness brackets, according to the American Gaming Association. Most of this money was bet in bracket pools, which consist of a group of people each entering their predictions of the NCAA tournament games along with a buy-in. The bracket that comes closest to being right wins. If you also consider the bracket pools where only pride is at stake, the number of participants is much greater. Despite all this attention, most do not give themselves the best chance to win because they are focused on the wrong question.

The Right Question

Mistake #3 in Dr. John Elder's Top 10 Data Science Mistakes is to ask the wrong question. A cornerstone of any successful analytics project is having the right project goal; that is, aiming at the right target. If you're like most people, when you fill out your bracket you ask yourself, "What do I think is most likely to happen?" This is the wrong question to ask if you are competing in a pool, because the objective is to win money, NOT to make the most correct bracket. The correct question to ask is: "What bracket gives me the best chance to win the money?" (This requires studying the payout formula. I used ESPN standard scoring, which awards 320 possible points per round, with all pool money given to the winner: 10 points for each correct pick in the round of 64, 20 in the round of 32, and so forth, doubling until 320 points are awarded for a correct championship call.)
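
Under that scoring scheme every round is worth the same 320 points in total, and a perfect bracket scores 1,920. A quick check of the arithmetic:

# ESPN standard scoring: points per correct pick double each round
# while the number of games halves, so each round totals 320 points.
games_per_round = [32, 16, 8, 4, 2, 1]
points_per_pick = [10, 20, 40, 80, 160, 320]

round_totals = [g * p for g, p in zip(games_per_round, points_per_pick)]
print(round_totals)       # [320, 320, 320, 320, 320, 320]
print(sum(round_totals))  # 1920 points for a perfect bracket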

While these questions seem similar, the brackets they produce will be significantly different.

If you ignore your opponents and pick the teams with the best chance to win games, you will reduce your chance of winning money. Even the strongest team is unlikely to win it all, and even if they do, plenty of your opponents likely picked them as well. The best way to optimize your chances of making money is to choose a champion with a good chance to win who is unpopular with your opponents.

Knowing how other people in your pool are filling out their brackets is crucial, because it helps you identify teams that are less likely to be picked. One way to see how others are filling out their brackets is via ESPN's "Who Picked Whom" page (Figure 1). It summarizes how often each team is picked to advance in each round across all ESPN brackets and is a great first step toward identifying overlooked teams.

Figure 1. ESPN's "Who Picked Whom" Tournament Challenge page

For a team to be overlooked, their perceived chance to win must be lower than their actual chance to win. The Who Picked Whom page provides an estimate of perceived chance to win, but to find undervalued teams we also need estimates of actual chance to win. These can range from a complex prediction model to your own gut feeling. Two sources I trust are 538's March Madness predictions and Vegas futures betting odds. 538's predictions are based on a combination of computer rankings and have predicted performance well in past tournaments. There is also reason to pay attention to Vegas odds, because if they were too far off, the sportsbooks would lose money.

However, both sources have their flaws. 538 is based on computer ratings, so while it avoids human bias, it misses out on expert intuition. Most Vegas sportsbooks likely use both computer ratings and expert intuition to create their betting odds, but they are strongly motivated to have equal betting on all sides, so they are significantly affected by human perception. For example, if everyone were betting on Duke to win the NCAA tournament, the books would adjust Duke's betting odds so that more people would bet on other teams, avoiding large losses. When calculating win probabilities for this article, I chose to average the 538 and Vegas predictions to obtain a balance I was comfortable with.
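
The article does not show the conversion step, but a common way to turn futures odds into probabilities is to invert the decimal odds and normalize away the bookmaker's margin, then blend with 538's numbers. The sketch below does a simple 50/50 blend; the odds, teams, and weight are illustrative only, not the author's actual inputs.

def implied_probabilities(decimal_odds):
    """Invert decimal futures odds and normalize so probabilities sum to 1,
    removing the bookmaker's overround (vig)."""
    raw = {team: 1.0 / odds for team, odds in decimal_odds.items()}
    total = sum(raw.values())
    return {team: p / total for team, p in raw.items()}

def blend(p_538, p_vegas, weight=0.5):
    """Weighted average of 538 and Vegas win probabilities."""
    return {t: weight * p_538[t] + (1 - weight) * p_vegas[t] for t in p_538}

# Illustrative numbers for a two-team toy field.
vegas = implied_probabilities({"Duke": 3.5, "Virginia": 7.0})
fivethirtyeight = {"Duke": 0.55, "Virginia": 0.45}
print(blend(fivethirtyeight, vegas))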

Let's look at last year. Figure 2 compares a team's perceived chance to win (based on ESPN's Who Picked Whom) to their actual chance to win (based on the 538-Vegas averaged predictions) for the leading 2019 NCAA Tournament teams. (Probabilities for all 64 teams in the tournament appear in Table 6 in the Appendix.)

Figure 2. Actual versus perceived chance to win March Madness for 8 top teams

As shown in Figure 2, participants over-picked Duke and North Carolina as champions and under-picked Gonzaga and Virginia. Many factors contributed to these selections; for example, most predictive models, avid sports fans, and bettors agreed that Duke was the best team last year. If you were picking the bracket most likely to occur, then selecting Duke as champion was the natural pick. But ignoring the selections made by others in your pool won't help you win your pool.

While this graph is interesting, how can we turn it into concrete takeaways? Gonzaga and Virginia look like good picks, but what about the rest of the teams hidden in that bottom-left corner? Does it ever make sense to pick teams like Texas Tech, who had a 2.6% chance to win it all, with only 0.9% of brackets picking them? How much does picking an overvalued favorite like Duke hurt your chances of winning your pool?

To answer these questions, I simulated many bracket pools and found that the teams in Gonzaga's and Virginia's spots are usually the best picks: the most undervalued of the top four or five favorites. However, as the size of your bracket pool increases, overlooked lower seeds like third-seeded Texas Tech or fourth-seeded Virginia Tech become more attractive. The logic for this is simple: the chance that one of these teams wins it all is small, but if they do, then you probably win your pool regardless of the number of participants, because it's likely no one else picked them.

Simulations Methodology

To simulate bracket pools, I first had to simulate brackets. I used an average of the Vegas and 538 predictions to run many simulations of the actual events of March Madness. As discussed above, this method isn't perfect, but it's a good approximation. Next, I used the Who Picked Whom page to simulate many human-created brackets. For each human bracket, I calculated the chance it would win a pool of size n by first finding its percentile ranking among all human brackets, assuming one of the 538-Vegas simulated brackets represented the real events. This percentile is essentially the chance that it beats a random human bracket. I raised the percentile to the power n - 1 (the number of opponents), then repeated for all simulated 538-Vegas brackets, averaging the results to get a single win probability per bracket.

For example, let's say that for one 538-Vegas simulation my bracket is in the 90th percentile of all human brackets, and there are nine other people in my pool. The chance I win the pool would be 0.9^9, or about 0.39. If we assumed a different simulation, then my bracket might only be in the 20th percentile, which would make my win probability 0.2^9, or about 0.0000005. By averaging these probabilities over all 538-Vegas simulations, we can estimate a bracket's win probability in a pool of size n, assuming we trust our input sources.
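
A condensed sketch of that calculation, assuming the bracket's percentile rank against the simulated human field has already been computed for each simulated tournament outcome (the function and variable names are placeholders, not the author's code):

def estimated_win_probability(percentiles, pool_size):
    """Average, over simulated tournament outcomes, the chance that this bracket
    beats all (pool_size - 1) opponents drawn from the human-bracket population.
    `percentiles` holds the bracket's percentile rank (0-1) under each outcome."""
    opponents = pool_size - 1
    return sum(p ** opponents for p in percentiles) / len(percentiles)

# Worked example from the text: 90th percentile under one simulated outcome,
# 20th percentile under another, in a ten-person pool (nine opponents).
print(round(0.9 ** 9, 3))                                   # ~0.387
print(f"{0.2 ** 9:.1e}")                                    # ~5.1e-07
print(estimated_win_probability([0.9, 0.2], pool_size=10))  # ~0.194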

Results

I used this methodology to simulate bracket pools with 10, 20, 50, 100, and 1000 participants. The detailed results of the simulations are shown in Tables 1-6 in the Appendix. Virginia and Gonzaga were the best champion picks when the pool had 50 or fewer participants. Yet, interestingly, Texas Tech and Purdue (3-seeds) and Virginia Tech (4-seed) were as good as or better champion picks when the pool had 100 or more participants.

General takeaways from the simulations:

Additional Thoughts

We have assumed that your local pool makes its selections just like the rest of America, which probably isn't true. If you live close to a team that's in the tournament, then that team will likely be over-picked. For example, I live in Charlottesville (home of the University of Virginia), and Virginia has been picked as the champion in roughly 40% of brackets in my pools over the past couple of years. If you live close to a team with a high seed, one strategy is to start with ESPN's Who Picked Whom odds, then boost the odds of the popular local team and correspondingly drop the odds for all other teams, as in the sketch below. Another strategy I've used is to ask people in my pool who they are picking. It is mutually beneficial, since I'd then be less likely to pick whoever they are picking.
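
A minimal sketch of that adjustment: pin the local team's champion-pick share at what you observe in your pool and rescale every other team so the shares still sum to 1. The national shares below are made-up numbers, and only four of the 64 teams are shown.

def adjust_for_local_bias(pick_shares, local_team, local_share):
    """Fix the local team's champion-pick share and rescale the other teams
    so that all shares still sum to 1."""
    others = {team: share for team, share in pick_shares.items() if team != local_team}
    scale = (1.0 - local_share) / sum(others.values())
    adjusted = {team: share * scale for team, share in others.items()}
    adjusted[local_team] = local_share
    return adjusted

# Hypothetical national pick shares, adjusted for a pool where ~40% pick Virginia.
national = {"Duke": 0.38, "North Carolina": 0.15, "Virginia": 0.07, "Gonzaga": 0.10}
print(adjust_for_local_bias(national, "Virginia", 0.40))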

As a parting thought, I want to describe a scenario from the 2019 NCAA tournament that some of you may be familiar with. Auburn, a five seed, was winning by two points in the waning moments of the game when they inexplicably fouled the other team in the act of shooting a three-point shot with one second to go. The opposing player, a 78% free-throw shooter, stepped to the line and missed two of three shots, allowing Auburn to advance. This isn't an alternate reality; this is how Auburn won their first-round game against 12-seeded New Mexico State. They proceeded to beat powerhouses Kansas, North Carolina, and Kentucky on their way to the Final Four, where they faced the exact same situation against Virginia. Virginia's Kyle Guy made all three of his free throws, and Virginia went on to win the championship.

I add this to highlight an important qualifier of this analysis: it's impossible to accurately predict March Madness. Were the people who picked Auburn to go to the Final Four geniuses? Of course not. Had Terrell Brown of New Mexico State made his free throws, they would have looked silly. There is no perfect model that can predict the future, and those who do well in the pools are not basketball gurus; they are just lucky. Implementing the strategies discussed here won't guarantee a victory; they just reduce the amount of luck you need to win. And even with the best models, you'll still need a lot of luck. It is March Madness, after all.

Appendix: Detailed Analyses by Bracket Sizes

At baseline (randomly), a bracket in a ten-person pool has a 10% chance to win. Table 1 shows how that chance changes based on the round in which a given team is picked to lose. For example, brackets that had Virginia losing in the Round of 64 won a ten-person pool 4.2% of the time, while brackets that picked them to win it all won 15.1% of the time. As a reminder, these simulations were done with only pre-tournament information; they had no data indicating that Virginia was the eventual champion, of course.

Table 1 Probability that a bracket wins a ten-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

In ten-person pools, the best-performing brackets were those that picked Virginia or Gonzaga as the champion, winning 15% of the time. Notably, early-round picks did not have a big influence on the chance of winning the pool, the exception being brackets that had a one or two seed losing in the first round. Brackets that had a three seed or lower as champion performed very poorly, but having lower seeds make the Final Four did not have a significant impact on the chance of winning.

Table 2 shows the same information for bracket pools with 20 people. The baseline chance is now 5%, and again the best-performing brackets are those that picked Virginia or Gonzaga to win. Similarly, picks in the first few rounds do not have much influence. Michigan State has now risen to the third-best champion pick, and interestingly Purdue is the third-best runner-up pick.

Table 2 Probability that a bracket wins a 20-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool size increases to 50, as shown in Table 3, picking the overvalued favorites (Duke and North Carolina) as champion drops your chances well below the baseline (2%). The slightly undervalued two and three seeds now raise your chances above the baseline when selected as champions, but Virginia and Gonzaga remain the best picks.

Table 3 Probability that a bracket wins a 50-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

With the bracket pool size at 100 (Table 4), Virginia and Gonzaga are joined by undervalued three-seeds Texas Tech and Purdue. Picking any of these four raises your baseline chances from 1% to close to 2%. Picking Duke or North Carolina again hurts your chances.

Table 4 Probability that a bracket wins a 100-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

When the bracket pool grows to 1000 people (Table 5), there is a complete changing of the guard. Virginia Tech is now the optimal champion pick, raising your baseline chance of winning your pool from 0.1% to 0.4%, followed by the three-seeds and sixth-seeded Iowa State as the next-best champion picks.

Table 5 Probability that a bracket wins a 1000-person bracket pool given that it had a given team (row) making it to a given round (column) and no further

For reference, Table 6 shows the actual chance to win versus the chance of being picked to win for all teams seeded seventh or better. These chances are derived from the ESPN Who Picked Whom page and the 538-Vegas predictions. The data for the top eight teams in Table 6 are plotted in Figure 2. Notably, Duke and North Carolina are overvalued, while the rest are all at least slightly undervalued.

The teams in bold in Table 6 are examples of teams that are good champion picks in larger pools. They all have a high ratio of actual chance to win to chance of being picked to win, but a low overall actual chance to win.

Table 6 Actual chance to win the championship vs. chance the team is picked to win the championship (undervalued teams shown in green; overvalued teams in red)

About the Author

Robert Robison is an experienced engineer and data analyst who loves to challenge assumptions and think outside the box. He enjoys learning new skills and techniques to reveal value in data. Robert earned a BS in Aerospace Engineering from the University of Virginia, and is completing an MS in Analytics through Georgia Tech.

In his free time, Robert enjoys playing volleyball and basketball, watching basketball and football, reading, hiking, and doing anything with his wife, Lauren.

Continued here:

How to Pick a Winning March Madness Bracket - Machine Learning Times - machine learning & data science news - The Predictive Analytics Times

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning – HPCwire

Posted: at 8:44 pm


SAN JOSE, Calif., Feb. 21, 2020 The international evaluation agency Standard Performance Evaluation Corporation (SPEC) recently finalized the election of new Open System Steering Committee (OSSC) executive members, which include Inspur, Intel, AMD, IBM, Oracle, and three other companies.

It is worth noting that Inspur, a re-elected OSSC member, was also re-elected as chair of the SPEC Machine Learning (SPEC ML) working group. The ML benchmark development plan proposed by Inspur has been approved by members; it aims to provide users with a standard for evaluating machine learning computing performance.

SPEC is a global, authoritative third-party application performance testing organization established in 1988. It aims to establish and maintain a series of performance, function, and energy-consumption benchmarks, and it provides important reference standards for users evaluating the performance and energy efficiency of computing systems. The organization consists of 138 well-known technology companies, universities, and research institutions, such as Intel, Oracle, NVIDIA, Apple, Microsoft, Inspur, Berkeley, and Lawrence Berkeley National Laboratory, and its test standards have become important indicators for many users evaluating overall computing performance.

The OSSC executive committee is the permanent body of SPEC OSG (short for Open System Group, the earliest and largest committee established by SPEC). It is responsible for supervising and reviewing the daily work of OSG's major technical groups, and for handling major issues, member additions and removals, the direction of research, and decisions on testing standards. The OSSC executive committee also manages the development and maintenance of the SPEC CPU, SPEC Power, SPEC Java, SPEC Virt, and other benchmarks.

Machine learning is an important direction in AI development. Different computing accelerator technologies such as GPUs, FPGAs, and ASICs, and different AI frameworks such as TensorFlow and PyTorch, provide customers with a rich marketplace of options. However, the next important thing for customers to consider is how to evaluate the computing efficiency of the various AI computing platforms. Both enterprises and research institutions require a set of benchmarks and methods to effectively measure performance and find the right solution for their needs.

In the past year, Inspur has done much to advance development of the SPEC ML standard, contributing test models, architectures, use cases, methods, and more, which has been duly acknowledged by the SPEC organization and its members.

Joe Qiao, General Manager of the Inspur Solution and Evaluation Department, believes that SPEC ML can provide an objective comparison standard for AI/ML applications, which will help users choose a computing system that best meets their application needs. Meanwhile, it also provides a unified measurement standard for manufacturers to improve their technologies and solution capabilities, advancing the development of the AI industry.

About Inspur

Inspur is a leading provider of data center infrastructure, cloud computing, and AI solutions, ranking among the world's top 3 server manufacturers. Through engineering and innovation, Inspur delivers cutting-edge computing hardware design and extensive product offerings to address important technology arenas like open computing, cloud data centers, AI, and deep learning. Performance-optimized and purpose-built, our world-class solutions empower customers to tackle specific workloads and real-world challenges. To learn more, please go to www.inspursystems.com.

Source: Inspur

View post:

Inspur Re-Elected as Member of SPEC OSSC and Chair of SPEC Machine Learning - HPCwire

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 – The Register

Posted: at 8:44 pm


Microsoft has announced a new application, Dynamics 365 Project Operations, as well as additional AI-driven features for its Dynamics 365 range.

If you are averse to buzzwords, look away now. Microsoft Business Applications President James Phillips announced the new features in a post that promises "AI-driven insights," a "holistic 360-degree view of a customer," "personalized customer experiences across every touchpoint," and "real-time actionable insights."

Dynamics 365 is Microsoft's cloud-based suite of business applications covering sales, marketing, customer service, field service, human resources, finance, supply chain management and more. There are even mixed-reality offerings for product visualisation and remote assistance.

Dynamics is a growing business for Microsoft, thanks in part to integration with Office 365, even though some of the applications are quirky and awkward to use in places. Licensing is complex too and can be expensive.

Keeping up with what is new is a challenge. If you have a few hours to spare, you could read the 546-page 2019 Release Wave 2 [PDF] document, for features which have mostly been delivered, or the 405-page 2020 Release Wave 1 [PDF], about what is coming from April to September this year.

Many of the new features are small tweaks, but the company is also putting its energy into connecting data, both from internal business sources and from third parties, to drive AI analytics.

The updated Dynamics 365 Customer Insights includes data sources such as demographics and interests, firmographics, market trends, and product and service usage data, says Phillips. AI is also used in new forecasting features in Dynamics 365 Sales and in Dynamics 365 Finance Insights, coming in preview in May.

Dynamics 365 Project Operations (product screenshot)

The company is also introducing a new application, Dynamics 365 Project Operations, with general availability promised for October 1, 2020. This looks like a business-oriented take on project management, with the ability to generate quotes, track progress, allocate resources, and generate invoices.

Microsoft already offers project management through its Project products, though this is part of Office rather than Dynamics. What can you do with Project Operations that you could not do before with a combination of Project and Dynamics 365?

There is not a lot of detail in the overview, but rest assured that it has AI-powered business insights and seamless interoperability with Microsoft Teams, so it must be great, right? More will no doubt be revealed at the May Business Applications Summit in Dallas, Texas.

More here:

Buzzwords ahoy as Microsoft tears the wraps off machine-learning enhancements, new application for Dynamics 365 - The Register

Written by admin |

February 22nd, 2020 at 8:44 pm

Posted in Machine Learning

