
Archive for the ‘Machine Learning’ Category

Activating vacation mode: Utilizing AI and machine learning in your … – TravelDailyNews International

Posted: April 25, 2023 at 12:10 am


without comments


Say the words "dream vacation" and everyone will picture something different. This poses a particular challenge to the modern travel marketer, especially in a world of personalization, where all travelers are looking for their own unique experiences. Fortunately, artificial intelligence (AI) provides a solution that allows travel marketers to draw upon a variety of sources when researching the best ways to connect with potential audiences.

By utilizing and combining data from user-generated content, transaction history and other online communications, AI and machine-learning (ML) solutions can help to give marketers a customer-centric approach, while successfully accounting for the vast diversity amongst their consumer base.

AI creates significant value for travel brands, which is why 48% of business executives are likely to invest in AI and automation in customer interactions over the next two years, according to Deloitte. Using AI and a data-driven travel marketing strategy, you can predict behaviors and proactively market to your ideal customers. There are as many AI solutions in the market as there are questions that require data, so choosing the right one is important.

For example, a limited-memory AI solution can skim a review site, such as TripAdvisor, to determine the most popular destinations around a major travel season, like summertime. Or a chatbot can speak directly with visitors to your site and aggregate their data to give brands an idea of what prospective consumers are looking for. Other solutions offer predictive segmentation, which can separate consumers based on their probability of taking action, categorize your leads and share personalized outreach on their primary channels. Delivering personalized recommendations is a major end goal for AI solutions in the travel industry. For example, Booking.com utilizes a consumer's search history to determine whether they are traveling for business or leisure and provides recommendations accordingly.
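As a rough illustration, predictive segmentation can be as simple as bucketing leads by a modelled probability of acting. The scores, names and cut-offs below are hypothetical, not from any vendor's tool:

```python
def segment_leads(leads, hot=0.7, warm=0.3):
    """Bucket (name, probability) pairs into outreach segments.

    The probabilities are assumed to come from an upstream ML model;
    the 0.7 / 0.3 cut-offs are illustrative only.
    """
    segments = {"hot": [], "warm": [], "cold": []}
    for name, prob in leads:
        if prob >= hot:
            segments["hot"].append(name)
        elif prob >= warm:
            segments["warm"].append(name)
        else:
            segments["cold"].append(name)
    return segments

buckets = segment_leads([("ana", 0.92), ("ben", 0.45), ("cal", 0.10)])
```

A real system would then route each segment to its primary channel (email, social and so on) with messaging tailored to that propensity tier.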

A major boon of today's AI and machine-learning solutions is their ability to monitor and inform users of ongoing behavioral trends. For example, who could have predicted the popularity of hotel day passes for remote workers as little as three years ago? Or the growing consumer desire for sustainable toiletries? Trends change every year (or, more accurately, every waking hour), so having a tool that can stay ahead of the next big thing is essential.

In an industry where every element of the customer's experience (travel costs, hotels, activities) is meticulously planned, delivering personalized experiences is critical to maintaining a customer's interest. Consumers want personalization. As Google reports, 90% of leading marketers indicate that personalization significantly contributes to business profitability.

Particularly in the travel field, where there are as many consumer preferences as there are destinations on a map, personalization is essential to gaining travelers' attention. AI capabilities can solve common traveler frustrations, further enhancing the consumer experience. Natural language processors can skim through review sites, gathering the general sentiment from prior reviews and identifying common complaints that may arise. Through these analyses, drawn from a range of sources across a consumer's journey, you can catch problems before they start.

For travel marketers already dealing with a diverse audience, and with a need for personalization to effectively stand out amongst the competition, AI and ML solutions can effectively help you plan and execute personalized outreach, foster brand loyalty and optimize the consumer experience. With AI working behind the scenes, your customers can look forward to fun in the sun, on the slopes, or wherever their destination may be.

Janine Pollack is the Executive Director, Growth & Content, and self-appointed Storyteller in Chief at MNI Targeted Media. She leads the brand's commitment to generating content that informs and inspires. Her scope of work includes strategy and development for Fortune Knowledge Group's thought leadership programs and launching Fortune's The Most Powerful Woman podcast. She is proud to have partnered with The Hebrew University on the inaugural Nexus: Israel program, featuring worldwide luminaries. Janine has also written lifetime achievement pieces for Sports Business Journal. She earned her master's from the Northwestern University Medill School of Journalism and her B.A. from The American University in Washington, D.C.

Continue reading here:

Activating vacation mode: Utilizing AI and machine learning in your ... - TravelDailyNews International

Written by admin

April 25th, 2023 at 12:10 am

Posted in Machine Learning

What is one downside to deep learning? – Rebellion Research

Posted: at 12:10 am


without comments

What is one downside to deep learning?

Deep learning is a subset of machine learning that involves training artificial neural networks to recognize patterns in data. While deep learning has shown remarkable success in recent years, enabling breakthroughs in fields such as computer vision, natural language processing, and robotics, it is not without its flaws. One of the major challenges facing deep learning is its slow adaptability to changing environments and new data.

Deep learning algorithms typically train on large datasets to recognize patterns in the data. These patterns can then be used to make predictions or to classify new data that the model has not seen before. However, the performance of deep learning models usually deteriorates over time as the data they were trained on becomes outdated or no longer reflects real-world conditions. This is known as the problem of concept drift, where the statistical properties of the data change over time, leading to degraded performance of the model.
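A minimal, self-contained sketch of the problem, using synthetic data rather than anything from a real system: a classifier fitted when the two classes were well separated keeps its old decision threshold, and its accuracy drops once the positive class drifts toward the negative one.

```python
import random

random.seed(0)

def make_data(n, pos_mean):
    """Synthetic 1-D data: label 1 comes from a Gaussian at pos_mean,
    label 0 from a Gaussian at 0.0 (both with standard deviation 0.5)."""
    data = []
    for _ in range(n):
        if random.random() < 0.5:
            data.append((random.gauss(pos_mean, 0.5), 1))
        else:
            data.append((random.gauss(0.0, 0.5), 0))
    return data

def accuracy(threshold, data):
    correct = sum(1 for x, y in data if (x > threshold) == (y == 1))
    return correct / len(data)

# "Training time": positives centre at 2.0, so a threshold of 1.0 works well
old_data = make_data(2000, pos_mean=2.0)
threshold = 1.0
acc_old = accuracy(threshold, old_data)

# Concept drift: the positive class now centres at 0.5, overlapping the negatives,
# but the deployed model still uses the stale threshold
new_data = make_data(2000, pos_mean=0.5)
acc_new = accuracy(threshold, new_data)
```

The fixed threshold scores well above 90% on the old distribution and falls sharply on the drifted one, which is exactly the degradation the article describes.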

Several techniques have been proposed to address the problem of concept drift in deep learning. One approach uses a continuous learning framework, in which the model is updated over time with new data to prevent the accumulation of errors due to concept drift. Another uses transfer learning, in which a pre-trained model is fine-tuned on new data to adapt to the changing environment.
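One way to sketch the continuous-learning idea: refit a very simple model on a sliding window of recent observations, so that old, drifted data ages out automatically. The window size and the mean-threshold "model" here are illustrative assumptions, not a production technique.

```python
from collections import deque

class SlidingWindowModel:
    """Toy continuous-learning scheme: a 1-D threshold classifier that is
    refit on only the most recent observations, so stale data stops
    influencing predictions once it falls out of the window."""

    def __init__(self, window=100):
        self.window = deque(maxlen=window)

    def update(self, x, label):
        self.window.append((x, label))

    def threshold(self):
        pos = [x for x, y in self.window if y == 1]
        neg = [x for x, y in self.window if y == 0]
        if not pos or not neg:
            return 0.0
        return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

    def predict(self, x):
        return 1 if x > self.threshold() else 0

model = SlidingWindowModel(window=100)
for _ in range(50):
    model.update(0.0, 0)   # pre-drift negatives centre at 0
    model.update(2.0, 1)   # pre-drift positives centre at 2
old_threshold = model.threshold()   # midpoint: 1.0
for _ in range(50):
    model.update(2.0, 0)   # after drift, negatives centre at 2
    model.update(4.0, 1)   # and positives at 4
new_threshold = model.threshold()   # midpoint has moved to 3.0
```

Once the window fills with post-drift data, the decision boundary has moved from 1.0 to 3.0 with no manual retraining step, which is the behaviour a continuous learning framework aims for.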

Despite these approaches, deep learning models still struggle with slow adaptability to new data and changing environments. This is due in part to the fact that deep learning models are highly parameterized and require large amounts of data to learn complex representations of the input. As a result, updating a model with new data can be computationally expensive and time-consuming, making it difficult to adapt quickly to changing conditions.

In conclusion, the slow adaptability of deep learning models to changing environments and new data is a major flaw, and one that needs to be addressed to enable their wider adoption in real-world applications. While techniques such as continuous learning and transfer learning show promise, more research is needed to develop more efficient and effective approaches to this challenge. By addressing this flaw, deep learning can continue to revolutionize fields ranging from healthcare to finance to transportation, enabling new breakthroughs and transforming our world.

What is an example of concept drift?

Deep Learning 101: Introduction [Pros, Cons & Uses] (v7labs.com)

Advantages of Deep Learning | disadvantages of Deep Learning (rfwireless-world.com)

Pros and Cons of Deep Learning Pythonista Planet

Advantages and Disadvantages of Deep Learning | Analytics Steps

4 Disadvantages of Neural Networks & Deep Learning | Built In

What is one downside to deep learning?

Continue reading here:

What is one downside to deep learning? - Rebellion Research

Written by admin

April 25th, 2023 at 12:10 am

Posted in Machine Learning

3 exciting jobs in artificial intelligence and machine learning this week – VentureBeat

Posted: at 12:10 am


without comments

Join top executives in San Francisco on July 11-12, to hear how leaders are integrating and optimizing AI investments for success. Learn More

Adoption of artificial intelligence (AI) has more than doubled globally since 2017, according to a state of AI report from McKinsey from late 2022, and the level of investment has increased as well.

Five years ago, 40% of organizations using AI said that over 5% of their digital budgets was spent on AI. Now, 52% report that level of investment, and things look set to improve further with 63% of respondents saying they expect investment to increase over the next three years.

Rising adoption is leading to job opportunities both within the sector, and as a result of AI-driven automation. According to the International Federation of Robotics, there has been a 14% increase year-over-year in the number of automated jobs, with junior workers the most affected. Figures from the World Economic Forum predict that the rapid growth of AI will create another 95 million high-paying jobs by 2025.

Since the beginning of 2023, awareness and understanding of AI and machine learning (ML) technologies have crossed the Rubicon. No longer the province of the tech-savvy, OpenAI's ChatGPT has brought about mass adoption of generative AI in particular.


ChatGPT is just one of an increasing number of generative AI tools, including Bing Chat and Google Bard. DeepMind's AlphaCode writes computer programs at a competitive level; Jasper is an AI copywriter; and DALL-E, Midjourney and Stable Diffusion can all create realistic images and art from a description you provide.

With the progression in AI capabilities, this is a sector in growth, but also one where you can build a career with plenty of prospects; the Bureau of Labor Statistics predicts a 31.4% increase by 2030 in jobs for data scientists and mathematical science professionals, which are crucial to AI.

Plus, AI and ML jobs tend to pay well. While salary differs from role to role, the top-paying states for AI, according to talent.com, are New York ($200,000), Maryland ($171,300) and Virginia ($170,000).

Want to get ahead of the crowd? The VentureBeat Job Board offers a first port of call for thousands of tech jobs; discover three interesting AI/ML roles below.

Roblox is building the tools and platform that empower its community to bring any experience they can imagine to life. As a Technical Director on the Game Engine team, you will work on state-of-the-art real-time AI and machine learning projects and will shape the vision of low-compute-cost, real-time machine learning at Roblox, taking the lead on designing core components of Engine team ML operations. A strong understanding of deep learning frameworks, including PyTorch and TensorFlow, as well as machine learning feature development workflows from training to deployment, is required. You'll be experienced in shipping ML features in production environments and in optimizing and deploying models to mobile devices. Knowledge of state-of-the-art deep learning network architectures and primitives, as well as 10 or more years in two or more languages (Python, C++, Rust, Lua, Go or JavaScript), is required. See the full list of requirements for this position.

The Data Automation team at Bloomberg develops machine learning models and infrastructure to extract key information from all kinds of financial documents. As Senior Machine Learning Engineer, Data Automation, you will own the research on ground-breaking ML/NLP techniques and design efforts for the most efficient and practical application of those techniques to sophisticated business problems. You will use our automated ML suite, equipped with annotation platforms, for collecting training data and hyper-tuning models, as well as deploy your application on our scalable ETL infrastructure. You'll need four or more years of experience with an object-oriented programming language such as Java or Python, and subject matter expertise in one or more of the following: artificial intelligence (AI), natural language processing (NLP), machine learning (ML), statistical models, and text analytics on large data sets. Interested? See the full job description here.

Meta is seeking a Software Engineer, ML Systems (Technical Leadership) to join its research and development teams. You will drive the organization's goal toward relevant machine learning techniques, and build and optimize intelligent systems that improve Meta's products and experiences. You will also assist in goal setting related to project impact, AI system design and ML excellence; develop custom/novel architectures; define use cases; and develop methodology and benchmarks to evaluate different approaches. A Bachelor's degree in computer science, computer engineering, a relevant technical field, or equivalent practical experience is required, as is extensive experience communicating and working across functions to drive solutions. Experience developing AI algorithms or AI-system infrastructure in C/C++ or Python is also a requirement. See all the details for this job.

VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.

See the rest here:

3 exciting jobs in artificial intelligence and machine learning this week - VentureBeat

Written by admin

April 25th, 2023 at 12:10 am

Posted in Machine Learning

How machine learning is personalizing the online casino … – mtltimes.ca

Posted: at 12:10 am


without comments

The online casino industry is highly profitable, bringing huge revenue to countries all over the world. With continued technological growth and the prevalent use of artificial intelligence, the industry is expected to grow even further. Many online casinos already use machine learning in database management. Compared with other industries, the users of online casinos are unpredictable, which is why companies need to collect and analyze data in order to understand their customers. Below, we look at some examples of how machine learning is affecting online casinos and how it is personalizing the online casino experience.

Collecting and Analyzing Data

The gambling industry has long analyzed consumer behavior to better understand the preferences of its players. Before the use of AI, casinos took advantage of club cards and loyalty programs; however, in this day and age, collecting data from consumers is easy.

The data show how players interact on a casino site and what their preferences are. By collecting data, gaming companies want to understand why their consumers choose specific titles and why they engage in a specific way, then use that data to further improve the casino experience.

To analyze data, complex systems are necessary, but machine learning simplifies the process by collecting, analyzing, and presenting information that gaming companies can interpret. Leading online casinos are utilizing this technology to enhance user experience and entice more players.

By tracking consumer behavior, casinos can create a unique online casino experience for each player. In other words, players can access sites that are tailored to their preferences. Once a player logs in, they are presented with their favorite games based on their previous behavior on the site.

Personalized Bonuses and Promotions

Marketing plays a crucial role in the success of any online business, and for gambling sites, welcome bonuses are a major attraction for new customers. However, most operators offer the same promotion to everyone, as customizing offers for each user would be time-consuming and resource-intensive. Thankfully, artificial intelligence can help solve this problem.

Machine learning algorithms can analyze user data and behaviour to determine the best offer for each player, providing a more personalized experience that meets their individual needs and interests. They can offer the best Casino Rewards in Canada to all their players.

This approach allows online casinos to offer the same level of customer satisfaction to all players, not just VIPs, by tailoring promotions and rewards to each user. By enhancing the customer experience in this way, casinos can increase player loyalty, ultimately leading to greater success in the highly competitive online gaming industry.

Detecting and Preventing Fraud

Online casinos are a thriving business, and with that comes the increased risk of fraud. Traditional measures, such as employing security personnel and installing cameras, are effective for brick-and-mortar casinos, but not for online ones. Machine learning is a valuable tool in identifying cheating behaviour, but how can it prevent cheaters from accessing gaming sites?

The development of cyber-security is the answer. Soon, advancements in this field will enable casinos to better protect themselves from unscrupulous players, ensuring a safe and enjoyable experience for all.

Faster Withdrawals

The advent of machine learning is transforming the online casino industry by revolutionizing the withdrawal process and making it faster and more efficient for players. Traditional withdrawal processes required various checks and verifications, causing the process to take several days.

By analyzing the transaction history and behavior of players, machine learning can detect suspicious activity, allowing online casinos to identify and prevent fraudulent withdrawals in time. Besides the safety benefits, machine learning also helps make all withdrawals faster.

Additionally, it can make the verification process easier by automatically checking whether documents are authentic and detecting anything suspicious. Thus, online casinos can offer their players a more reliable, faster and more efficient withdrawal process, which in turn leads to greater player satisfaction and new players.

Final Thoughts

To conclude, machine learning is revolutionizing the online casino industry by personalizing the player experience. By collecting and examining data on the behaviour, preferences and transaction history of players, machine learning can provide the insight needed to create personalized games, promotions, incentives and bonuses.

By offering players a personalized online casino experience, casinos not only increase player satisfaction but also encourage the loyalty and retention of players, which is significant in a market as competitive as the online casino market.

Beyond the personalized player experience, machine learning also helps operators identify and prevent fraudulent activities, ensuring a safer and more secure online gaming environment for players. Furthermore, the automation of different processes, such as document verification and fraud detection, speeds up transaction times and reduces processing costs, resulting in more efficient and effective online casino operations.

Thus, introducing machine learning to this industry is a significant step towards a more personalized and rewarding player experience. As technology continues to advance, the expectation is that machine learning will become even more common, enabling online casinos to offer increasingly engaging and personalized experiences to their players.


Read the rest here:

How machine learning is personalizing the online casino ... - mtltimes.ca

Written by admin

April 25th, 2023 at 12:10 am

Posted in Machine Learning

Machine Learning in Pharmaceutical Industry Market Is Expected to … – GlobeNewswire

Posted: at 12:09 am


without comments

Portland, OR, April 20, 2023 (GLOBE NEWSWIRE) -- According to the report published by Allied Market Research, the global machine learning in pharmaceutical industry market garnered $1.2 billion in 2021, and is estimated to generate $26.2 billion by 2031, manifesting a CAGR of 37.9% from 2022 to 2031. The report provides an extensive analysis of changing market dynamics, major segments, value chain, competitive scenario, and regional landscape. This research offers valuable guidance to leading players, investors, shareholders, and startups in devising strategies for sustainable growth and gaining a competitive edge in the market.

Request Sample Report: https://www.alliedmarketresearch.com/request-sample/74979


The research provides detailed segmentation of the global machine learning in pharmaceutical industry market based on component, enterprise size, deployment, and region. The report discusses segments and their sub-segments in detail with the help of tables and figures. Market players and investors can strategize according to the highest revenue-generating and fastest-growing segments mentioned in the report.

Procure Complete Report (280 Pages PDF with Insights, Charts, Tables, and Figures) at:

https://www.alliedmarketresearch.com/global-machine-learning-in-pharmaceutical-industry-market/purchase-options

Based on component, the solution segment held the highest share in 2021, accounting for more than two-thirds of the global machine learning in pharmaceutical industry market and is expected to continue its leadership status during the forecast period. However, the services segment is expected to register the highest CAGR of 39.5% from 2022 to 2031.

On the basis of enterprise size, the large enterprises segment accounted for the highest share in 2021, contributing to around three-fourths of the global machine learning in pharmaceutical industry market, and is expected to maintain its lead in terms of revenue during the forecast period. Moreover, the SMEs segment is expected to manifest the highest CAGR of 40.1% from 2022 to 2031.

Based on deployment, the cloud segment accounted for the highest share in 2021, holding more than two-thirds of the global machine learning in pharmaceutical industry market, and is expected to continue its leadership status during the forecast period. This segment is estimated to grow at the highest CAGR of 40.0% during the forecast period. The report also discusses on-premise segment.

Based on region, North America held the largest share in 2021, contributing to nearly half of the global machine learning in pharmaceutical industry market share, and is projected to maintain its dominant share in terms of revenue in 2031. In addition, the Asia-Pacific region is expected to manifest the fastest CAGR of 42.4% during the forecast period. The report also analyzes the markets in Europe and LAMEA regions.

Leading market players of the global machine learning in pharmaceutical industry market analyzed in the research include BioSymetrics Inc., Deep Genomics, Atomwise Inc., NVIDIA Corporation, International Business Machines Corporation (IBM), Microsoft Corporation, Cyclica Inc., Cloud Pharmaceuticals, Inc., and Alphabet Inc.

Enquiry Before Buying: https://www.alliedmarketresearch.com/purchase-enquiry/74979

The report provides a detailed analysis of these key players of the global machine learning in pharmaceutical industry market. These players have adopted different strategies such as new product launches, collaborations, expansion, joint ventures, agreements, and others to increase their market share and maintain dominant shares in different regions. The report is valuable in highlighting business performance, operating segments, product portfolio, and strategic moves of market players to showcase the competitive scenario.

About Us

Allied Market Research (AMR) is a full-service market research and business-consulting wing of Allied Analytics LLP based in Portland, Oregon. Allied Market Research provides global enterprises as well as medium and small businesses with unmatched quality of "Market Research Reports" and "Business Intelligence Solutions." AMR has a targeted view to provide business insights and consulting to assist its clients to make strategic business decisions and achieve sustainable growth in their respective market domain.

We are in professional corporate relations with various companies and this helps us in digging out market data that helps us generate accurate research data tables and confirms utmost accuracy in our market forecasting. Allied Market Research CEO Pawan Kumar is instrumental in inspiring and encouraging everyone associated with the company to maintain high quality of data and help clients in every way possible to achieve success. Each and every data presented in the reports published by us is extracted through primary interviews with top officials from leading companies of domain concerned. Our secondary data procurement methodology includes deep online and offline research and discussion with knowledgeable professionals and analysts in the industry.

Read this article:

Machine Learning in Pharmaceutical Industry Market Is Expected to ... - GlobeNewswire

Written by admin

April 25th, 2023 at 12:09 am

Posted in Machine Learning

AI to Z: all the terms you need to know to keep up in the AI hype age – The Conversation

Posted: at 12:09 am


without comments

Artificial intelligence (AI) is becoming ever more prevalent in our lives. It's no longer confined to certain industries or research institutions; AI is now for everyone.

It's hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can't have conversations about AI without understanding the concepts behind it.

We've compiled a glossary of terms we think everyone should know, if they want to keep up.

An algorithm is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information.

The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be advanced in performance, yet behave in a way that's against human values. We saw an example of this in 2015, when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as gorillas.

Artificial general intelligence refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. Most AI experts agree this will happen, but disagree on specific details such as when it will happen, and whether or not it will result in AI systems that are fully autonomous.

Artificial neural networks are computer algorithms used within a branch of AI called deep learning. They're made up of layers of interconnected nodes in a way that mimics the neural circuitry of the human brain.
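A toy forward pass makes the "layers of interconnected nodes" idea concrete. The weights below are made up for illustration; a real network learns them from data:

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: each output node sums its weighted
    inputs, adds a bias, and applies a non-linearity (tanh here)."""
    return [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

def tiny_net(x):
    """A 2-input -> 2-hidden -> 1-output network with fixed toy weights."""
    hidden = dense(x, weights=[[1.0, -1.0], [0.5, 0.5]], biases=[0.0, 0.1])
    out = dense(hidden, weights=[[1.0, 1.0]], biases=[0.0])
    return out[0]

score = tiny_net([0.3, 0.7])  # a single value in (-1, 1)
```

Each node only weighs, sums and squashes its inputs; the network's power comes from stacking many such layers, as deep learning does at scale.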

Big data refers to datasets that are much more massive and complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.

Big data can be characterised by four Vs: volume refers to the overall amount of data, velocity refers to how quickly the data grow, veracity refers to how accurate and trustworthy the data are, and variety refers to the different formats the data come in.

The Chinese Room thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does.

This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity but are actually just presenting outputs based on predictions made by the underlying model.

Deep learning is a category within the machine-learning branch of AI. Deep-learning systems use advanced neural networks and can process large amounts of complex data to achieve higher accuracy.

These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.

A diffusion model is an AI model that learns by adding random noise to a set of training data before removing it, and then assessing the differences. The objective is to learn about the underlying patterns or relationships in data that are not immediately obvious.

These models are designed to self-correct as they encounter new data and are therefore particularly useful in situations where there is uncertainty, or if the problem is very complex.
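The forward (noising) half of a diffusion process is easy to sketch; the hard part a real diffusion model learns, reversing the corruption step by step, is omitted here. The data and noise scales are toy values:

```python
import random

def add_noise(x, noise_scale, rng):
    """Forward step of a diffusion-style process: corrupt the data
    with Gaussian noise. A real diffusion model trains a network to
    undo this corruption, one small step at a time."""
    return [v + rng.gauss(0.0, noise_scale) for v in x]

rng = random.Random(0)
clean = [0.0, 1.0, 2.0, 3.0]
slightly_noisy = add_noise(clean, 0.1, rng)   # early step: data still visible
very_noisy = add_noise(clean, 5.0, rng)       # late step: mostly noise
```

By comparing the clean input with its noised versions at many scales, the model learns what structure must be restored at each step of the reverse process.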

Explainable AI is an emerging, interdisciplinary field concerned with creating methods that will increase users' trust in the processes of AI systems.

Due to the inherent complexity of certain AI models, their internal workings are often opaque, and we can't say with certainty why they produce the outputs they do. Explainable AI aims to make these "black box" systems more transparent.

Generative AI systems create new content (text, images, audio and video) in response to prompts. Popular examples include ChatGPT, DALL-E 2 and Midjourney.

Data labelling is the process through which data points are categorised to help an AI model make sense of the data. This involves identifying data structures (such as image, text, audio or video) and adding labels (such as tags and classes) to the data.

Humans do the labelling before machine learning begins. The labelled data are split into distinct datasets for training, validation and testing.

The training set is fed to the system for learning. The validation set is used to verify whether the model is performing as expected, and when parameter tuning and training can stop. The testing set is used to evaluate the finished model's performance.
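The three-way split described above can be sketched in a few lines. The 70/15/15 proportions are a common convention, not a rule:

```python
import random

def split_dataset(labelled, train=0.7, val=0.15, seed=42):
    """Shuffle labelled examples and split them into train, validation
    and test sets (proportions and seed are illustrative)."""
    items = list(labelled)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (
        items[:n_train],                 # fed to the system for learning
        items[n_train:n_train + n_val],  # used to tune and stop training
        items[n_train + n_val:],         # held out for final evaluation
    )
```

Shuffling before splitting matters: it keeps each subset representative of the whole, so the validation and test scores actually reflect how the model will generalise.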

Large language models (LLMs) are trained on massive quantities of unlabelled text. They analyse data, learn the patterns between words and can produce human-like responses. Some examples of AI systems that use large language models are OpenAI's GPT series and Google's BERT and LaMDA series.

Machine learning is a branch of AI that involves training AI systems to be able to analyse data, learn patterns and make predictions without specific human instruction.

While large language models are a specific type of AI model used for language-related tasks, natural language processing is the broader AI field that focuses on machines' ability to learn, understand and produce human language.

Parameters are the settings used to tune machine-learning models. You can think of them as the programmed weights and biases a model uses when making a prediction or performing a task.

Since parameters determine how the model will process and analyse data, they also determine how it will perform. An example of a parameter is the number of neurons in a given layer of the neural network. Increasing the number of neurons will allow the neural network to tackle more complex tasks but the trade-off will be higher computation time and costs.
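Counting the parameters of a stack of fully connected layers makes the trade-off concrete. The layer sizes below are illustrative (a small image classifier shape), not from any particular model:

```python
def dense_layer_params(n_inputs, n_neurons):
    """A fully connected layer has one weight per input-neuron pair
    plus one bias per neuron."""
    return n_inputs * n_neurons + n_neurons

def network_params(layer_sizes):
    """Total trainable parameters for a stack of dense layers,
    e.g. [784, 128, 10] for a toy image classifier."""
    return sum(
        dense_layer_params(n_in, n_out)
        for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
    )

small = network_params([784, 128, 10])   # 101,770 parameters
large = network_params([784, 256, 10])   # roughly double
```

Doubling the hidden layer from 128 to 256 neurons roughly doubles the parameter count, and with it the computation needed for every prediction and training step.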

The responsible AI movement advocates for developing and deploying AI systems in a human-centred way.

One aspect of this is to embed AI systems with rules that will have them adhere to ethical principles. This would (ideally) prevent them from producing outputs that are biased, discriminatory or could otherwise lead to harmful outcomes.

Sentiment analysis is a technique in natural language processing used to identify and interpret the emotions behind a text. It captures implicit information such as the author's tone and the extent of positive or negative expression.
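A toy lexicon-based scorer illustrates the idea. Real sentiment systems use learned models rather than fixed word lists; the lexicons here are invented for the example:

```python
POSITIVE = {"great", "love", "excellent", "happy", "good"}
NEGATIVE = {"terrible", "hate", "awful", "sad", "bad"}

def sentiment_score(text):
    """Return a score in [-1, 1]; positive values suggest positive sentiment."""
    words = text.lower().split()
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("I love this product, it is excellent!"))  # 1.0
print(sentiment_score("Terrible service, I hate it."))           # -1.0
```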

Supervised learning is a machine-learning approach in which labelled data are used to train an algorithm to make predictions. The algorithm learns to match the labelled input data to the correct output. After learning from a large number of examples, it can continue to make predictions when presented with new data.
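One of the simplest supervised learners shows the pattern of matching labelled inputs to outputs: a nearest-neighbour classifier that predicts the label of the closest training example. The data and labels below are invented for illustration.

```python
def nearest_neighbour_predict(train, x):
    """Predict the label of x from the closest labelled training example."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = min(train, key=lambda example: sq_dist(example[0], x))
    return nearest[1]

# Labelled input-output pairs: (features, label)
train = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
         ((8.0, 9.0), "large"), ((9.0, 8.5), "large")]

# New, unseen data points are classified by proximity to labelled examples
print(nearest_neighbour_predict(train, (1.1, 0.9)))  # small
print(nearest_neighbour_predict(train, (8.5, 9.0)))  # large
```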

Training data are the (usually labelled) data used to teach AI systems how to make predictions. The accuracy and representativeness of training data have a major impact on a models effectiveness.

A transformer is a type of deep-learning model used primarily in natural language processing tasks.

The transformer is designed to process sequential data, such as natural language text, and figure out how the different parts relate to one another. This can be compared to how a person reading a sentence pays attention to the order of the words to understand the meaning of the sentence as a whole.

One example is the generative pre-trained transformer (GPT), which the ChatGPT chatbot runs on. The GPT model uses a transformer to learn from a large corpus of unlabelled text.
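The "paying attention to how parts of a sequence relate" idea can be sketched as single-head scaled dot-product attention, the core computation inside a transformer. This is a bare-bones numpy sketch with random toy vectors, not a full transformer layer:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every position; weights come from query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise similarity between positions
    # Softmax turns similarities into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# A toy "sentence" of 3 tokens, each represented by a 4-dimensional vector
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(weights.shape)  # (3, 3): how much each token attends to each other token
```

Each row of `weights` is a probability distribution over the sequence, which is the mechanism by which word order and context shape the output.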

The Turing test is a machine intelligence concept first introduced by computer scientist Alan Turing in 1950.

It's framed as a way to determine whether a computer can exhibit human intelligence. In the test, computer and human outputs are compared by a human evaluator. If the outputs are deemed indistinguishable, the computer has passed the test.

Google's LaMDA and OpenAI's ChatGPT have been reported to have passed the Turing test, although critics say the results reveal the limitations of using the test to compare computer and human intelligence.

Unsupervised learning is a machine-learning approach in which algorithms are trained on unlabelled data. Without human intervention, the system explores patterns in the data, with the goal of discovering unidentified patterns that could be used for further analysis.
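A classic example of discovering structure without labels is k-means clustering. The minimal implementation below groups unlabelled points into clusters; the data are invented to form two obvious groups.

```python
import numpy as np

def kmeans(points, k, n_iter=20, seed=0):
    """Group unlabelled points into k clusters by iteratively refining centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        # Assign each point to its nearest centroid
        labels = np.argmin(
            ((points[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        # Move each centroid to the mean of its assigned points
        centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centroids

# Two well-separated groups; the algorithm finds them with no labels provided
points = np.array([[0.1, 0.2], [0.0, 0.1], [0.2, 0.0],
                   [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
labels, _ = kmeans(points, k=2)
print(labels)  # first three points share one label, last three the other
```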


Visit link:

AI to Z: all the terms you need to know to keep up in the AI hype age - The Conversation

Written by admin

April 25th, 2023 at 12:09 am

Posted in Machine Learning

Causal Bayesian machine learning to assess treatment effect … – Nature.com

Posted: at 12:09 am



This is a post hoc exploratory analysis of the COVID STEROID 2 trial7. It was conducted according to a statistical analysis plan, which was written after the pre-planned analyses of the trial were reported, but before any of the analyses reported in this manuscript were conducted (https://osf.io/2mdqn/). This manuscript was presented according to the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) checklist12, with Bayesian analyses reported according to the Reporting of Bayes Used in clinical STudies (ROBUST) guideline13.

HTE implies that some individuals respond differently, i.e., better or worse, than others who receive the same therapy due to differences between individuals. Most trials are designed to evaluate the average treatment effect, which is the summary of all individual effects in the trial sample (see supplementary appendix for additional technical details). Traditional HTE methods examine patient characteristics one at a time, looking to identify treatment effect differences according to individual variables. This approach is well known to be limited as it is underpowered (due to adjustment for multiple testing) and does not account for the fact that many characteristics under examination are correlated and may have synergistic effects. As a result, more complex relationships between variables that better define individuals, and thus may better inform understanding about the variations in treatment response, may be missed using conventional HTE approaches. Thus, identifying true and clinically meaningful HTE requires addressing these data and statistical modeling challenges. BART is inherently an attractive method for this task, as the algorithm automates the detection of nonlinear relationships and interactions hierarchically based on the strength of the relationships, thereby reducing researchers' discretion when analyzing experimental data. This approach also avoids any model misspecification or bias inherent in traditional interaction test procedures. BART can also be deployed, as we do herein, within the counterfactual framework to study HTE, i.e., to estimate conditional average treatment effects given the set of covariates or potential effect modifiers11,14,15, and has shown superior performance to competing methods in extensive simulation studies16,17. These features make BART an appealing tool for trialists to explore HTE to inform future confirmatory HTE analyses in trials and hypothesis generation more broadly.
Thus, this analysis used BART to evaluate the presence of multivariable HTE and estimate conditional average treatment effects among meaningful subgroups in the COVID STEROID 2 trial.

The COVID STEROID 2 trial7 was an investigator-initiated, international, parallel-group, stratified, blinded, randomized clinical trial conducted at 31 sites in 26 hospitals in Denmark, India, Sweden, and Switzerland between 27 August 2020 and 20 May 2021 [7,18]. The trial was approved by the regulatory authorities and ethics committees in all participating countries.

The trial enrolled 1000 adult patients hospitalized with COVID-19 and severe hypoxemia (≥10 L oxygen/min, use of non-invasive ventilation (NIV), continuous use of continuous positive airway pressure (cCPAP), or invasive mechanical ventilation (IMV)). Patients were primarily excluded due to previous use of systemic corticosteroids for COVID-19 for 5 or more days, unobtainable consent, and use of higher-dose corticosteroids for indications other than COVID-19 [4,17]. Patients were randomized 1:1 to dexamethasone 12 mg/d or 6 mg/d intravenously once daily for up to 10 days. Additional details are provided in the primary protocol and trial report7,18.

The trial protocol was approved by the Danish Medicines Agency, the ethics committee of the Capital Region of Denmark, and institutionally at each trial site. The trial was overseen by the Collaboration for Research in Intensive Care and the George Institute for Global Health. A data and safety monitoring committee oversaw the safety of the trial participants and conducted 1 planned interim analysis. Informed consent was obtained from the patients or their legal surrogates according to national regulations.

We examined two outcomes: (1) DAWOLS at day 90 (i.e., the observed number of days without the use of IMV, circulatory support, and kidney replacement therapy, without assigning dead patients the worst possible value), and (2) 90-day mortality. Binary mortality outcomes were used to match the primary trial analysis; time-to-event outcomes also generally tend to be less robust for ICU trials19. We selected DAWOLS at day 90 in lieu of the primary outcome of the trial (DAWOLS at day 28) to align with other analyses of the trial that sought to examine longer-term outcomes. Both outcomes were assessed in the complete intention-to-treat (ITT) population, which comprised 982 patients after the exclusion of patients without consent for the use of their data7. As the sample size is fixed, there was no formal sample size calculation for this study.

While BART is a data-driven approach that can scan for interdependent relationships among any number of factors, we only examined heterogeneity across a pre-selected set of factors deemed to be clinically relevant by the authors and members of the COVID STEROID 2 trial Management Committee. The pre-selected variables that were included in this analysis are listed below with the scale used in parentheses. Continuous covariates were standardized to have a mean of 0 and a standard deviation of 1 prior to analysis. Detailed variable definitions are available in the study protocol18.

participant age (continuous),

limitations in care (yes, no),

level of respiratory support (open system versus NIV/cCPAP versus IMV),

interleukin-6 (IL-6) receptor inhibitors (yes, no),

use of dexamethasone for up to 2 days versus use for 3 to 4 days prior to randomization,

participant weight (continuous),

diabetes mellitus (yes, no),

ischemic heart disease or heart failure (yes, no),

chronic obstructive pulmonary disease (yes, no), and,

immunosuppression within 3 months prior to randomization (yes, no).

We evaluated HTE on the absolute scale (i.e., mean difference in days for the number of DAWOLS at day 90 and the risk difference for 90-day mortality). The analysis was separated into two stages14,20,21,22. In the first stage, conditional average treatment effects were estimated according to each participant's covariates using BART models. The DAWOLS outcome was treated as a continuous variable and analyzed using standard BART, while the binary mortality outcome was analyzed using logit BART. In the second stage, a fit-the-fit approach was used, where the estimated conditional average treatment effects were used as dependent variables in models to identify covariate-defined subgroups with differential treatment effects. This second stage used classification and regression tree (CART) models23, where the maximum depth was set to 3 as a post hoc decision to aid interpretability. As the fit-the-fit reflects estimates from the BART model, the resulting overall treatment effects (e.g., risk difference) vary slightly from the raw trial data.
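The two-stage logic can be sketched in miniature. The following is an illustrative Python analogue on simulated data, not the paper's R implementation: simple within-bin outcome means stand in for the stage-one BART models, and an exhaustive single-split search stands in for the stage-two CART model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trial: the treatment helps only patients under 60 (a built-in HTE)
n = 2000
age = rng.uniform(30, 90, n)
treat = rng.integers(0, 2, n)
outcome = 10 + 2 * treat * (age < 60) + rng.normal(0, 1, n)

# Stage 1 stand-in: per-patient conditional treatment effect estimates,
# here from treated-minus-control means within coarse age bins
bins = np.digitize(age, np.arange(30, 90, 5))
cate = np.zeros(n)
for b in np.unique(bins):
    m = bins == b
    cate[m] = outcome[m & (treat == 1)].mean() - outcome[m & (treat == 0)].mean()

# Stage 2 "fit-the-fit" stand-in: a single-split regression tree on the CATE
# estimates finds the covariate threshold that best explains the heterogeneity
def best_split(x, y):
    best_t, best_sse = None, np.inf
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = y[x <= t], y[x > t]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

threshold = best_split(age, cate)
print(round(threshold))  # recovers a split near the true change point at age 60
```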

BART models are often fit using a sum of 200 trees and specifying a base prior of 0.95 and a power prior of 2, which penalize substantial branch growth within each tree15. Although these default hyperparameters tend to work well in practice, it was possible they were not optimal for these data. Thus, the hyperparameters were evaluated using tenfold cross-validation, comparing predictive performance of the model under 27 pre-specified possibilities, namely every combination of power priors equal to 1, 2, or 3, base priors equal to 0.25, 0.5, or 0.95, and number of trees equal to 50, 200, or 400. The priors corresponding to the lowest cross-validation error were used in the final models. Each model used a Markov chain Monte Carlo procedure consisting of 4 chains that each had 100 burn-in iterations and a total length of 1100 iterations. Posterior convergence for each model was assessed using the diagnostic procedures described in Sparapani et al.24. Model diagnostics were good for all models. All parameters seemed to converge within the burn-in period and the z-scores for Geweke's convergence diagnostic25 were approximately standard normal. All BART models were fit using R statistical computing software v. 4.1.2 [26] with the BART package v. 2.9 [24], and all CART models were fit using the rpart package v. 4.1.16 [27].

The analysis was performed under the ITT paradigm; compliance issues were considered minimal. As in the primary analyses of the trial, the small amount of missing outcome data was ignored in the primary analyses. Sensitivity analyses were performed under best/worst- and worst/best-case imputation. For best/worst-case imputation, the entire estimation procedure was repeated after setting all missing mortality outcome data in the 12 mg/d group to alive at 90 days and all missing mortality outcome data in the 6 mg/d group to dead at 90 days. Then, all days with missing life support data were set to alive without life support for the 12 mg/d group and the opposite for the 6 mg/d group. Under worst/best-case imputation, the estimation procedure was repeated under the opposite conditions, e.g., setting all missing mortality outcome data in the 12 mg/d group to dead at 90 days and all missing mortality outcome data in the 6 mg/d group to alive at 90 days.

The resulting decision trees from each fit-the-fit analysis described above (one for the 90-day mortality outcome, and one for the 90-day DAWOLS outcome) were outputted (with continuous variables de-standardized, i.e., back-translated to the original scales). Likewise, the resulting decision trees for each outcome under best/worst- and worst/best-case imputation were outputted for comparison with the complete records analyses. All statistical code is made available at https://github.com/harhay-lab/Covid-Steroid-HTE.

Visit link:

Causal Bayesian machine learning to assess treatment effect ... - Nature.com


April 25th, 2023 at 12:09 am

Posted in Machine Learning

How reinforcement learning with human feedback is unlocking the power of generative AI – VentureBeat

Posted: at 12:09 am




The race to build generative AI is revving up, marked by both the promise of these technologies capabilities and the concern about the dangers they could pose if left unchecked.

We are at the beginning of an exponential growth phase for AI. ChatGPT, one of the most popular generative AI applications, has revolutionized how humans interact with machines. This was made possible thanks to reinforcement learning with human feedback (RLHF).

In fact, ChatGPT's breakthrough was only possible because the model has been taught to align with human values. An aligned model delivers responses that are helpful (the question is answered in an appropriate manner), honest (the answer can be trusted) and harmless (the answer is neither biased nor toxic).

This has been possible because OpenAI incorporated a large volume of human feedback into its AI models to reinforce good behaviors. Yet even as human feedback becomes a critical part of the AI training process, these models remain far from perfect, and concerns about the speed and scale at which generative AI is being taken to market continue to make headlines.


Lessons learned from the early era of the AI arms race should serve as a guide for AI practitioners working on generative AI projects everywhere. As more companies develop chatbots and other products powered by generative AI, a human-in-the-loop approach is more vital than ever to ensure alignment and maintain brand integrity by minimizing biases and hallucinations.

Without human feedback by AI training specialists, these models can cause more harm to humanity than good. That leaves AI leaders with a fundamental question: How can we reap the rewards of these breakthrough generative AI applications while ensuring that they are helpful, honest and harmless?

The answer to this question lies in RLHF, especially ongoing, effective human feedback loops to identify misalignment in generative AI models. Before understanding the specific impact that reinforcement learning with human feedback can have on generative AI models, let's dive into what it actually means.

To understand reinforcement learning, you need to first understand the difference between supervised and unsupervised learning. Supervised learning requires labeled data on which the model is trained so it learns how to behave when it comes across similar data in real life. In unsupervised learning, the model learns all by itself. It is fed data and can infer rules and behaviors without labeled data.

Models that make generative AI possible use unsupervised learning. They learn how to combine words based on patterns, but it is not enough to produce answers that align with human values. We need to teach these models human needs and expectations. This is where we use RLHF.

Reinforcement learning is a powerful approach to machine learning (ML) where models are trained to solve problems through the process of trial and error. Behaviors that optimize outputs are rewarded, and those that don't are punished and put back into the training cycle to be further refined.

Think about how you train a puppy: a treat for good behavior and a time-out for bad behavior. RLHF involves large and diverse sets of people providing feedback to the models, which can help reduce factual errors and customize AI models to fit business needs. With humans added to the feedback loop, human expertise and empathy can now guide the learning process for generative AI models, significantly improving overall performance.
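The reward-and-refine loop can be sketched as a toy value-learning simulation. This is not how production RLHF pipelines work (they train reward models and fine-tune large networks); the simulated rater, response list, and learning rate here are all invented for illustration.

```python
import random

# A toy "human feedback" loop: the model proposes one of three candidate
# behaviours; a simulated rater rewards the helpful one. Over many rounds
# of trial and error, the rewarded behaviour comes to dominate.
random.seed(0)
responses = ["off-topic", "toxic", "helpful"]
values = {r: 0.0 for r in responses}  # learned value of each behaviour
alpha = 0.1                           # learning rate

def human_reward(response):
    """Stand-in for a human rater: +1 for good behaviour, -1 otherwise."""
    return 1.0 if response == "helpful" else -1.0

for _ in range(500):
    # Explore occasionally; otherwise exploit the best-valued behaviour
    if random.random() < 0.2:
        choice = random.choice(responses)
    else:
        choice = max(values, key=values.get)
    reward = human_reward(choice)
    values[choice] += alpha * (reward - values[choice])

print(max(values, key=values.get))  # helpful
```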

Reinforcement learning with human feedback is not only critical to ensuring the model's alignment; it's crucial to the long-term success and sustainability of generative AI as a whole. Let's be very clear on one thing: without humans taking note and reinforcing what good AI is, generative AI will only dredge up more controversy and consequences.

Let's use an example: when interacting with an AI chatbot, how would you react if your conversation went awry? What if the chatbot began hallucinating, responding to your questions with answers that were off-topic or irrelevant? Sure, you'd be disappointed, but more importantly, you'd likely not feel the need to come back and interact with that chatbot again.

AI practitioners need to remove the risk of bad experiences with generative AI to avoid degraded user experience. With RLHF comes a greater chance that AI will meet users expectations moving forward. Chatbots, for example, benefit greatly from this type of training because humans can teach the models to recognize patterns and understand emotional signals and requests so businesses can execute exceptional customer service with robust answers.

Beyond training and fine-tuning chatbots, RLHF can be used in several other ways across the generative AI landscape, such as in improving AI-generated images and text captions, making financial trading decisions, powering personal shopping assistants and even helping train models to better diagnose medical conditions.

Recently, the duality of ChatGPT has been on display in the educational world. While fears of plagiarism have risen, some professors are using the technology as a teaching aid, helping their students with personalized education and instant feedback that empowers them to become more inquisitive and exploratory in their studies.

RLHF enables the transformation of customer interactions from transactions to experiences, automation of repetitive tasks and improvement in productivity. However, its most profound effect will be the ethical impact of AI. This, again, is where human feedback is most vital to ensuring the success of generative AI projects.

AI does not understand the ethical implications of its actions. Therefore, as humans, it is our responsibility to identify ethical gaps in generative AI as proactively and effectively as possible, and from there implement feedback loops that train AI to become more inclusive and bias-free.

With effective human-in-the-loop oversight, reinforcement learning will help generative AI develop more responsibly during a period of rapid growth across all industries. There is a moral obligation to keep AI a force for good in the world, and meeting that obligation starts with reinforcing good behaviors and iterating on bad ones to mitigate risk and improve efficiencies moving forward.

We are at a point of both great excitement and great concern in the AI industry. Building generative AI can make us smarter, bridge communication gaps and build next-gen experiences. However, if we don't build these models responsibly, we face a great moral and ethical crisis in the future.

AI is at a crossroads, and we must make AI's most lofty goals a priority and a reality. RLHF will strengthen the AI training process and ensure that businesses are building ethical generative AI models.

Sujatha Sagiraju is chief product officer at Appen.


See the original post:

How reinforcement learning with human feedback is unlocking the power of generative AI - VentureBeat


April 25th, 2023 at 12:09 am

Posted in Machine Learning

Astronomers used AI to generate picture of black hole spotted in 2019 – Business Insider

Posted: April 17, 2023 at 12:13 am



A comparison of the original image (left), captured in 2019, and a new version supplemented by artificial intelligence that scientists believe is closer to what the black hole may actually look like. Lia Medeiros via The Associated Press

A group of astronomers released what they believe is a more accurate depiction of the M87 black hole, images they created using artificial intelligence to fill in the gaps from photos first released by researchers in 2019.

The new images, published Thursday in The Astrophysical Journal Letters, could provide important information for scientists studying the M87 black hole and others in the future, researchers said.

The original image first captured by the Event Horizon Telescope in 2017 was taken using a collection of high-powered telescopes around the globe focused on the black hole at the center of the Messier 87 galaxy. The hole is about 54 million light years away from Earth and located within the constellation Virgo.

However, as the world cannot be covered in telescopes to capture a clearer image, researchers developed a machine learning algorithm that could interpret the data from thousands of simulated images of what black holes should look like based on decades of calculations to fill in the gaps from the 2019 images, researchers said.

"With our new machine learning technique, PRIMO, we were able to achieve the maximum resolution of the current array," lead author Dr. Lia Medeiros said in a statement. "Since we cannot study black holes up-close, the detail of an image plays a critical role in our ability to understand its behavior."

Researchers said the thinner orange line around the black hole is produced by the emissions of hot gas falling into the black hole, and noted the new images still align with data captured by the Event Horizon Telescope and theoretical expectations.

They said the accuracy of the technology in analyzing the M87 black hole could allow researchers to use it to study other astronomical objects that have been captured by the Event Horizon Telescope, including Sagittarius A*, the central black hole in our own Milky Way galaxy.


Go here to see the original:

Astronomers used AI to generate picture of black hole spotted in 2019 - Business Insider


April 17th, 2023 at 12:13 am

Posted in Machine Learning

Machine learning used to sharpen the first image of a black hole – Digital Trends

Posted: at 12:13 am



The world watched in delight when scientists revealed the first-ever image of a black hole in 2019, showing the huge black hole at the center of the galaxy Messier 87. Now, that image has been refined and sharpened using machine learning techniques. The approach, called PRIMO (principal-component interferometric modeling), was developed by some of the same researchers who worked on the original Event Horizon Telescope project that took the photo of the black hole.

That image combined data from seven radio telescopes around the globe, which worked together to form a virtual Earth-sized array. While that approach was amazingly effective at seeing such a distant object located 55 million light-years away, it did mean that there were some gaps in the original data. The new machine learning approach has been used to fill in those gaps, allowing for a sharper and more precise final image.

"With our new machine-learning technique, PRIMO, we were able to achieve the maximum resolution of the current array," said lead author of the research, Lia Medeiros of the Institute for Advanced Study, in a statement. "Since we cannot study black holes up close, the detail in an image plays a critical role in our ability to understand its behavior. The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity."

PRIMO was trained using tens of thousands of example images which were created from simulations of gas accreting onto a black hole. By analyzing the pictures that resulted from these simulations for patterns, PRIMO was able to refine the data for the EHT image. The plan is that the same technique can be used for future observations from the EHT collaboration as well.
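The core idea of filling gaps using principal components learned from simulations can be sketched on a toy linear "image" model. This is an illustrative numpy sketch of the concept, not the actual PRIMO pipeline; the dimensions and the linear image model are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learn principal components from a library of simulated "images",
# then use them to fill gaps in a sparsely sampled observation.
n_sims, n_pix = 500, 64
basis = rng.normal(size=(5, n_pix))          # 5 underlying image patterns
sims = rng.normal(size=(n_sims, 5)) @ basis  # simulated image library

# Principal components of the simulation library (via SVD on centred data)
mean = sims.mean(axis=0)
_, _, Vt = np.linalg.svd(sims - mean, full_matrices=False)
components = Vt[:5]

# An observation with gaps: only ~60% of pixels are measured
truth = (rng.normal(size=(1, 5)) @ basis).ravel()
observed = rng.random(n_pix) < 0.6

# Fit component coefficients to the measured pixels only, then
# reconstruct the full image, unmeasured pixels included
A = components[:, observed].T
coeffs, *_ = np.linalg.lstsq(A, (truth - mean)[observed], rcond=None)
reconstruction = mean + coeffs @ components

err = np.abs(reconstruction - truth)[~observed].max()
print(err < 1e-6)  # unmeasured pixels are recovered almost exactly
```

Because the observation lies in the low-dimensional space spanned by the learned components, a fraction of the pixels suffices to pin down the whole image, which is the intuition behind using simulation-trained components to compensate for missing telescope coverage.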

"PRIMO is a new approach to the difficult task of constructing images from EHT observations," said another of the researchers, Tod Lauer of NSF's NOIRLab. "It provides a way to compensate for the missing information about the object being observed, which is required to generate the image that would have been seen using a single gigantic radio telescope the size of the Earth."

In 2022, the EHT collaboration followed up its image of the black hole in M87 with a stunning image of the black hole at the heart of the Milky Way, so that image could be the next target for sharpening using this technique.

"The 2019 image was just the beginning," said Medeiros. "If a picture is worth a thousand words, the data underlying that image have many more stories to tell. PRIMO will continue to be a critical tool in extracting such insights."

The research is published in The Astrophysical Journal Letters.

See original here:

Machine learning used to sharpen the first image of a black hole - Digital Trends


April 17th, 2023 at 12:13 am

Posted in Machine Learning

