
Archive for the ‘Machine Learning’ Category

Revolutionize Your Factor Investing with Machine Learning | by … – DataDrivenInvestor

Posted: April 17, 2023 at 12:13 am



Zero to one in Financial ML Developer with SKlearn

Factor investing has gained significant popularity in modern portfolio management. It refers to a systematic investment approach that focuses on specific risk factors or investment characteristics, such as value, size, momentum, and quality, to construct portfolios. The blossoming of machine learning in factor investing has its source at the confluence of three favorable developments: data availability, computational capacity, and economic groundings. This article discusses factor investing with machine learning.

These factors are believed to have historically exhibited persistent risk premia that can be exploited to achieve better risk-adjusted returns.

The historical roots of factor investing can be traced back several decades. Building on the asset-pricing research of the 1960s, Eugene F. Fama and Kenneth R. French published groundbreaking work in the early 1990s that laid the foundation for modern factor investing. This line of research identified specific risk factors that could explain the variation in stock returns, such as the value factor (stocks with low price-to-book ratios tend to outperform), the size factor (small-cap stocks tend to outperform large-cap stocks), and the momentum factor (stocks with recent positive price trends tend to continue outperforming).

By incorporating factor-based strategies into their portfolios, investors can aim to achieve enhanced diversification, improved risk-adjusted returns, and better risk management. Factor investing provides an alternative approach to traditional market-cap-weighted strategies, allowing investors to potentially capture excess returns by focusing on specific risk factors that have demonstrated historical performance.

Part of factor investing's popularity may come from how easily factor-based decisions can be explained in investment management.

In my last article, I talked about financial machine learning with Scikit-Learn using the 200+ Financial Indicators of US Stocks (2014-2018) dataset from Kaggle. I will use the same dataset in this article, so you can find more details in that earlier post.

We follow the common procedure, the one used in Fama and French (1992). The idea is simple: on a given date, sort stocks on a characteristic, split them into groups around the median, and compare the average alpha of each group.

We will start with a simple factor, the size factor. We will split stocks into two groups, Big and Small, and we'll see whether size can generate alpha.

# Filter rows based on the variable
filter_value = df_2014["Market Cap"].median()  # assumed: median split, matching the yearly loops below
Big_Cap = df_2014[df_2014['Market Cap'] > filter_value]
Small_Cap = df_2014[df_2014['Market Cap'] < filter_value]

print("Big cap alpha", Big_Cap["Alpha"].mean())print("Small cap alpha", Small_Cap["Alpha"].mean())

We can see that in 2014, big-cap stocks have an average alpha of 3.91 while small-cap stocks average -3.28.

Let's look at some other years.

Year_lst = ["2014", "2015", "2016", "2017", "2018"]  # the years available in the dataset
dic_alpha_bigcap = {}  # added: collects the big-cap alpha for each year

for Year in Year_lst:
    data = df_all[df_all['Year'] == Year]
    filter_value = data["Market Cap"].median()
    Big_Cap = data[data['Market Cap'] > filter_value]

    print("Year : ", Year)
    print("Big cap alpha", Big_Cap["Alpha"].mean())

    dic_alpha_bigcap[Year] = Big_Cap["Alpha"].mean()

dic_alpha_Small_Cap = {}  # added: collects the small-cap alpha for each year

for Year in Year_lst:
    data = df_all[df_all['Year'] == Year]
    filter_value = data["Market Cap"].median()
    Small_Cap = data[data['Market Cap'] < filter_value]

    print("Year : ", Year)
    print("Small cap alpha", Small_Cap["Alpha"].mean())

    dic_alpha_Small_Cap[Year] = Small_Cap["Alpha"].mean()

At this point, we may conclude that over 2014-2018 big-cap stocks outperformed small-cap stocks. We will now repeat this procedure for every variable.

Year_lst = ["2014", "2015", "2016", "2017", "2018"] dic_alpha = {}

for Year in Year_lst :data = df_all[df_all['Year'] == Year]filter_value = data[[column]].median()[0]

filterdata = data[data[column] > filter_value]

print("Year : ", Year )print(column, filterdata["Alpha"].mean())

dic_alpha[Year] = filterdata["Alpha"].mean()

return dic_alpha

def filter_data_below(column, df_all):

    Year_lst = ["2014", "2015", "2016", "2017", "2018"]
    dic_alpha = {}

    for Year in Year_lst:
        data = df_all[df_all['Year'] == Year]
        filter_value = data[column].median()
        filterdata = data[data[column] < filter_value]

        print("Year : ", Year)
        print(column, filterdata["Alpha"].mean())

        dic_alpha[Year] = filterdata["Alpha"].mean()

    return dic_alpha

import pandas as pd  # needed for pd.DataFrame.from_dict below

lst_alpha_all_top = []  # added: collects one single-column alpha DataFrame per factor

for factor in df_all.columns[2:]:
    try:
        print(factor)
        dic_alpha = filter_data_top(column=factor, df_all=df_all)
        df__alpha = pd.DataFrame.from_dict(dic_alpha, orient='index', columns=[factor])
        lst_alpha_all_top.append(df__alpha)
    except Exception:
        print("error")

Finally, we get an alpha table, which we can plot.
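The article does not show how the table is assembled; here is a minimal sketch of one way to do it (the concatenation and plotting steps are assumptions, while lst_alpha_all_top comes from the loop above):

import matplotlib.pyplot as plt

# assumed: join the per-factor alpha columns into one table indexed by year
df_alpha_table = pd.concat(lst_alpha_all_top, axis=1)
print(df_alpha_table)

# plot each factor's top-half alpha over time
df_alpha_table.plot(figsize=(8, 5), marker="o")
plt.ylabel("Mean alpha of stocks above the factor median")
plt.show()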

When analyzing stock returns, suppose all stock returns are stacked in a vector r and x is a lagged variable that exhibits predictive power in a regression analysis. It may be tempting to conclude that x is a good predictor of returns if the estimated coefficient b-hat is statistically significant at a specified threshold. To test the importance of x as a factor in predicting returns, we can use factor importance tests, where x is treated as the factor and y is the alpha. In the Fama and French equation, y is typically the return, but it can also be interpreted as the excess return (Ri - Rf), or the residual return after removing factor exposures, which is essentially the alpha. We won't delve into the details here; you can refer to the linked article if you are interested in learning more about this topic.
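As an illustration of that significance check, here is a minimal sketch using statsmodels (an assumption; the article itself uses scikit-learn later) of regressing alpha on one candidate factor and inspecting b-hat and its p-value:

import statsmodels.api as sm

# assumed setup: y is the alpha, x is one candidate factor column of df_all
y = df_all["Alpha"]
x = df_all["PE ratio"].fillna(df_all["PE ratio"].mean())

X = sm.add_constant(x)                       # adds the intercept term
ols = sm.OLS(y, X, missing="drop").fit()

print(ols.params)                            # intercept and b-hat for "PE ratio"
print(ols.pvalues)                           # significance of b-hat at a chosen threshold (e.g., 5%)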

Note: we need to convert the variables to percentile ranks.

dic_rank = {}  # added: holds the percentile-ranked data for each year

for Year in Year_lst:
    data = df_all[df_all['Year'] == Year]
    df_rank = data.rank(pct=True)  # rank within the year (the original ranked df_all here, which appears to be a typo)
    dic_rank[Year] = df_rank

df_rank = pd.concat(dic_rank)

I'll introduce another technique called the Maximal Information Coefficient. The Maximal Information Coefficient (MIC) is a statistical measure that quantifies the strength and non-linearity of the association between two variables. It is used to assess the relationship between two variables and determine if there is any significant mutual information or dependence between them. MIC is particularly useful in cases where the relationship between two variables is not linear, as it can capture non-linear associations that may be missed by linear methods such as correlation.

MIC is considered important because it offers several advantages over purely linear measures such as correlation.

We use minepy to test MIC.

from minepy import MINE  # assumed import

mine = MINE(est="mic_approx")
mine.compute_score(X_, y)  # X_ is presumably a single factor series; y is the alpha
mine.mic()

We have previously explained how factor investing works. Machine learning (ML) techniques can improve it in several ways: identifying relevant factors with predictive power, optimizing how factors are combined, timing factor exposures, enhancing risk management, improving portfolio construction, and incorporating alternative data sources. By analyzing historical data with ML algorithms, investors can dynamically adjust exposures based on market conditions and draw better insights from alternative data.

We fit a linear regression using the LinearRegression class from the sklearn.linear_model module in Python. The goal is to fit a linear regression model to the data in the df_rank DataFrame to predict the values of the dependent variable, denoted y, from the independent variable, denoted X, which is derived from the "PE ratio" column of df_rank.

Import LinearRegression

This line imports the LinearRegression class from the sklearn.linear_model module, which provides an implementation of linear regression in scikit-learn, a popular machine learning library in Python.

Define the dependent variable:

This line assigns the Alpha column of the df_all DataFrame to the variable y, which represents the dependent variable in the linear regression model.

This line assigns a DataFrame containing the PE ratio column of df_rank DataFrame to the variable X, which represents the independent variable in the linear regression model. The fillna() method is used to fill any missing values in the "PE ratio" column with the mean value of the column.

This line creates an instance of the LinearRegression class and assigns it to the variable PElin_reg.

Fit the linear regression model:

This line fits the linear regression model to the data, using the values of X as the independent variable and y as the dependent variable.

Extract the model coefficients:

These lines extract the intercept and coefficient(s) of the linear regression model, which represent the estimated parameters of the model. The intercept is accessed using the intercept_ attribute, and the coefficient(s) are accessed using the coef_ attribute. These values can provide insights into the relationship between the independent variable(s) and the dependent variable in the linear regression model.
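The code lines being described above were lost in this copy of the article; here is a minimal sketch consistent with the descriptions (the names PElin_reg, df_rank, and the "PE ratio" column come from the surrounding text, while the exact lines are assumptions):

from sklearn.linear_model import LinearRegression

# dependent variable: alpha
y = df_all["Alpha"]

# independent variable: the percentile-ranked PE ratio, missing values filled with the column mean
X = df_rank[["PE ratio"]].fillna(value=df_rank["PE ratio"].mean())

# create and fit the model
PElin_reg = LinearRegression()
PElin_reg.fit(X, y)

# estimated parameters
PElin_reg.intercept_, PElin_reg.coef_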

Repeat with ROE

X = df_rank[["ROE"]].fillna(value=df_rank["ROE"].mean())  # assumed: X is redefined with the ROE column before refitting

ROElin_reg = LinearRegression()
ROElin_reg.fit(X, y)
ROElin_reg.intercept_, ROElin_reg.coef_

And plot

import matplotlib.pyplot as plt  # assumed import

X1 = df_rank[["PE ratio"]].fillna(value=df_rank["PE ratio"].mean())
X2 = df_rank[["ROE"]].fillna(value=df_rank["ROE"].mean())

plt.figure(figsize=(6, 4))  # extra code - not needed, just formatting
plt.plot(X1, PElin_reg.predict(X1), "r-", label="PE")
plt.plot(X2, ROElin_reg.predict(X2), "b-", label="ROE")

# extra code - beautifies the figure
plt.xlabel("$x_1$")
plt.ylabel("$y$", rotation=0)
plt.grid()
plt.legend(loc="upper left")

plt.show()

We can see that the PE line has a steeper slope (a larger coef_). Therefore, we can say that PE is the better predictor of alpha. Now suppose we have three stocks and want to know which one will perform best.

Here we will try to predict Alpha's rank from 10 variables, repeating the same steps as before.

# featureScores (with 'Specs' and 'Score' columns) is assumed to come from the feature-scoring step in the previous article
top_fest = featureScores.nlargest(10, 'Score')["Specs"].tolist()

y = df_rank["Alpha"]X = df_rank[top_fest].fillna(value=df_rank[top_fest].mean() )

lin_reg = LinearRegression()lin_reg.fit(X, y)

lin_reg.predict(X)

Handle data
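The data-handling step isn't shown in this copy; here is a minimal sketch of what it likely does (the column names Alpha_rank, Predict_Alpha, and Unnamed: 0 are taken from the plotting code below; everything else is an assumption):

# assumed: put actual and predicted alpha ranks side by side for comparison
df_ = pd.DataFrame()
df_["Alpha_rank"] = df_rank["Alpha"].values
df_["Predict_Alpha"] = lin_reg.predict(X)
df_ = df_.reset_index().rename(columns={"index": "Unnamed: 0"})  # simple row index used as the x-axis below

Y1 = df_["Alpha_rank"]
Y2 = df_["Predict_Alpha"]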

Print the MSE:

from sklearn.metrics import mean_squared_error  # assumed import

mean_squared_error(Y1, Y2)

An MSE of 0.07 indicates that, on average, the squared differences between the predicted values and the actual values (the residuals) are relatively small. A lower MSE generally indicates better model performance, as it signifies that the model's predictions are closer to the actual values.

But let's plot the first 30 predictions against the actual values:

X = df_["Unnamed: 0"].head(30)Y1 = df_["Alpha_rank"].head(30)Y2 = df_["Predict_Alpha"].head(30)

plt.figure(figsize=(6, 6)) # extra code not needed, just formattingplt.plot( X, Y1, "r.", label="Alpha")plt.plot( X , Y2 , "b.", label="Predict_Alpha")

# extra code beautifies and saves Figure 42plt.xlabel("$x_1$")plt.ylabel("$y$", rotation=0)# plt.axis([1, 1, ])plt.grid()plt.legend(loc="upper left")

plt.show()

We can see that, although the MSE is low, the predicted values sit in the middle of the range rather than tracking the actual values at the extremes. That is a limitation of the linear regression model.

Now we will look at a very different way to train a linear regression model, which is better suited for cases where there are a large number of features or too many training instances to fit in memory.

Gradient descent is an optimization algorithm commonly used in machine learning to minimize a loss or cost function during the training process of a model. It is an iterative optimization algorithm that adjusts the model parameters in the direction of the negative gradient of the cost function in order to find the optimal values for the parameters that minimize the cost function.

from sklearn.linear_model import SGDRegressor  # assumed import

sgd_reg = SGDRegressor(max_iter=100000, tol=1e-5, penalty=None, eta0=0.01,
                       n_iter_no_change=100, random_state=42)
sgd_reg.fit(X, y.ravel())  # y.ravel() because fit() expects 1D targets

Sometimes we have too many variables, and we want to fit the training data much better than plain linear regression does. In this case we use 100 variables. The learning_curve function helps us check how the model behaves as the training set grows.

import numpy as np  # assumed import
from sklearn.model_selection import learning_curve  # assumed import

train_sizes, train_scores, valid_scores = learning_curve(
    LinearRegression(), X, y,
    train_sizes=np.linspace(0.01, 1.0, 40), cv=5,
    scoring="neg_root_mean_squared_error")

train_errors = -train_scores.mean(axis=1)
valid_errors = -valid_scores.mean(axis=1)

plt.figure(figsize=(6, 4))  # extra code - not needed, just formatting
plt.plot(train_sizes, train_errors, "r-+", linewidth=2, label="train")
plt.plot(train_sizes, valid_errors, "b-", linewidth=3, label="valid")

# extra code - beautifies the figure
plt.xlabel("Training set size")
plt.ylabel("RMSE")
plt.grid()
plt.legend(loc="upper right")

plt.show()

In this plot the red line is the training error and the blue line is the validation error. The model is appropriately sized when the two curves converge to a similar, low error as the training set grows. If the training error stays much lower than the validation error, the gap indicates overfitting; if both curves plateau at a high error close together, the model is underfitting.

At this point, we can look at the coefficients, which tell us what factors determine returns. We have still skipped a lot of details (normalization, factor engineering, etc.), other algorithms (regularized linear models such as Lasso regression and Elastic Net), and the use of ML in various other steps.

Machine Learning for Factor Investing by Guillaume Coqueret

https://colab.research.google.com/drive/1bXpSC-rln-yhmqd7Et06Y-ZkhmhC8joP?usp=sharing

See more here:

Revolutionize Your Factor Investing with Machine Learning | by ... - DataDrivenInvestor

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

AI and Machine Learning in Healthcare for the Clueless – Medscape

Posted: at 12:13 am



Recorded March 6, 2023. This transcript has been edited for clarity.

Robert A. Harrington, MD: Hi. This is Bob Harrington on theheart.org | Medscape Cardiology, and I'm here at the American College of Cardiology meetings in New Orleans, having a great time, by the way. It's really fun to be back live, in person, getting to see friends and colleagues, seeing live presentations, etc. If you've not been to a live meeting yet over the course of the past couple of years, please do start coming again, whether it's American College of Cardiology, American Heart Association, or European Society of Cardiology. It's fantastic.

Putting that aside, I've been learning many things at this meeting, particularly around machine learning, artificial intelligence (AI), and some of the advanced computational tools that people in the data-science space are using.

I'm fortunate to have an expert and, really, a rising thought leader in this field, Dr Jenine John. Jenine is a machine-learning research fellow at Brigham and Women's Hospital, working in Calum MacRae's research group.

What she talked about on stage this morning is what do you have to know about this whole field. I thought we'd go through some of the basic concepts of data science, what machine learning is, what AI is, and what neural networks are.

How do we start to think about this? As practitioners, we're going to be faced with how to incorporate some of this into our practice. You're seeing machine-learning algorithms put into your clinical operations. You're starting to see ways that people are thinking about, for example, Can the machine read the echocardiogram as good as we can? What's appropriate for the machine? What's appropriate for us? What's the oversight of all of this?

We'll have a great conversation for the next 12-20 minutes and see what we can all learn together. Jenine, thank you for joining us here today.

Jenine John, MD: Thank you for having me.

Harrington: Before we get into the specifics of machine learning and what you need to know, give me a little bit of your story. You obviously did an internal medicine residency. You did a cardiology fellowship. Now, you're doing an advanced research fellowship. When did you get bitten by the bug to want to do data science, machine learning, etc.?

John: It was quite late, actually. After cardiology fellowship, I went to Brigham and Women's Hospital for a research fellowship. I started off doing epidemiology research, and I took classes at the public health school.

Harrington: The classic clinical researcher.

John: Exactly. That was great because I gained a foundation in epidemiology and biostatistics, which I believe is essential for anyone doing clinical research. In 2019, I was preparing to write a K grant, and for my third aim, I thought, Oh, I want to make this complex model that uses many variables. This thing called machine learning might be helpful. I basically just knew the term but didn't know much about it.

I talked to my program director who led me to Dr Rahul Deo and Dr Calum MacRae's group that's doing healthcare AI. Initially, I thought I would just collaborate with them.

Harrington: Have their expertise brought into your grant and help to elevate the whole grant? That's the typical thing to do.

John: Exactly. As I learned a bit more about machine learning, I realized that this is a skill set I should really try to develop. I moved full-time into that group and learned how to code and create machine-learning models specifically for cardiac imaging. Six months later, the pandemic hit, so everything took a shift again.

I believe it's a shift for the better because I was exposed to everything going on in digital health and healthcare startups. There was suddenly an interest in monitoring patients remotely and using tech more effectively. I also became interested in how we are applying AI to healthcare and how we can make sure that we do this well.

Harrington: There are a couple of things that I want to expand on. Maybe we'll start this way. Let's do the definitions. How would you define AI and its role in medicine? And then, talk about a subset of that. Define machine learning for the audience.

John: Artificial intelligence and machine learning, the two terms are used pretty much synonymously within healthcare, because when we talk about AI in healthcare, really, we're talking about machine learning. Some people use the term AI differently. They feel that it's only if a system is autonomously thinking independently that you can call it AI. For the purposes of healthcare, we pretty much use them synonymously.

Harrington: For what we're going to talk about today, we'll use them interchangeably.

John: Yes, exactly.

Harrington: Define machine learning.

John: Machine learning is when a machine uses data and learns from the data. It picks up patterns, and then, it can basically produce output based on those patterns.

Harrington: Give me an example that will resonate with a clinical audience. You're an imager, and much of the work so far has been in imaging.

John: Imaging is really where machine learning shines. For example, you can use machine learning on echocardiograms, and you can use it to pick up whether this patient has valvular disease or not. If you feed an AI model enough echocardiograms, it'll start to pick up the patterns and be able to tell whether this person has valvular disease or not.

Harrington: The group that you're working with has been very prominent in being able to say whether they have hypertrophic cardiomyopathy, valve disease, or amyloid infiltrative disease.

There are enough data there that the machine starts to recognize patterns.

John: Yes.

Harrington: You said that you were, at the Harvard School of Public Health, doing what I'll call classic clinical research training. I had the same training. I was a fellow 30-plus years ago in the Duke Databank for Cardiovascular Diseases, and it was about epidemiology and biostatistics and how to then apply those to the questions of clinical research.

You were doing very similar things, and you said something this morning in your presentation that stuck with me. You said you really need to understand these things before you make the leap into trying to understand machine learning. Expand on that a little bit.

John: I think that's so important because right now, what seems to happen is you have the people the data scientists and clinicians and they seem to be speaking different languages. We really need more collaboration and getting on the same page. When clinicians go into data science, I think the value is not in becoming pure data scientists and learning to create great machine-learning models. Rather, it's bringing that clinical thinking and that clinical research thinking, specifically, to data science. That's where epidemiology and biostatistics come in because you really need to understand those concepts so that you understand which questions you should be asking. Are you using the right dataset to ask those questions? Are there biases that could be present?

Harrington: Every week, as you know, we all pick up our journals, and there's a machine-learning paper in one of the big journals all the time. Some of the pushback you'll hear, whether it's on social media or in letters to the editors, is why did you use machine learning for this? Why couldn't you use classical logistic regression?

One of the speakers in your session, I thought, did a nice job of that. He said that often, standard conventional statistics are perfectly fine. Then there are some instances where the machine is really better, and imaging is a great example. Would you talk to the audience a little bit about that?

John: I see it more as a continuum. I think it's helpful to see it that way because right now, we see traditional biostatistics and machine learning as completely different. Really, it's a spectrum of tools. There are simple machine-learning methods where you can't really differentiate much from statistical methods, and there's a gray zone in the middle. For simpler data, such as tabular data, maybe.

Harrington: Give the audience an example of tabular data.

John: For example, if you have people who have had a myocardial infarction (MI), and then you have characteristics of those individuals, such as age, gender, and other factors, and you want to use those factors to predict who gets an MI, in that instance, traditional regression may be best. When you get to more complex data, that's where machine learning really shines. That's where it gets exciting because they are questions that we haven't been able to ask before with the methods that we have. Those are the questions that we want to start using machine learning to answer.

Harrington: We've all seen papers again over the past few years. The Mayo Group has published a series of these about information that you can derive from the EKG. You can derive, for example, potassium levels from the EKG. Not the extremes that we've all been taught, but subtle perturbations. I think I knew this, but I was still surprised to hear it when one of your co-speakers said that there are over 30,000 data points in the typical EKG.

There's no way you can use conventional statistics to understand that.

John: Exactly. One thing I was a little surprised to see is that machine learning does quite well with estimating the age of the individual on the EKG. If you show a cardiologist an EKG, we could get an approximate estimate, but we won't be as good as the machine. Modalities like EKG and echocardiogram, which have so many more data points, are where the machine can find patterns that even we can't figure out.

Harrington: The secret is to ingest a huge amount of data. One of the things that people will ask me is, "Well, why is this so hot now?" It's hot now for a couple of reasons, one of which is that there's an enormous amount of data available. Almost every piece of information can be put into zeros and ones. Then there's cloud computing, which allows the machine to ingest this enormous amount of information.

You're not going to tell the age of a person from a handful of EKGs. It's thousands to millions of EKGs that machines evaluated to get the age. Is that fair?

John: This is where we talk about big data because we need large amounts of data for the machine to learn how to interpret these patterns. It's one of the reasons I'm excited about AI because it's stimulating interest in multi-institution collaborations and sharing large datasets.

We're annotating, collecting, and organizing these large multi-institutional datasets that can be used for a variety of purposes. We can use the full range of analytic approaches, machine learning or not, to learn more about patients and how to care for them.

Harrington: I've heard both Calum and Rahul talk about how they can get echocardiograms, for example, from multiple institutions. As the machine gets better and better at reading and interpreting the echocardiograms or looking for patterns of valvular heart disease, they can even take a more limited imaging dataset and apply what they've learned from the larger expanded dataset, basically to improve the reading of that echocardiogram.

One of the things it's going to do, I think, is open up the opportunity for more people to contribute their data beyond the traditional academics.

John: Because so much data are needed for AI, there's a role for community centers and other institutions to contribute data so that we can make robust models that work not only in a few academic centers but also for the majority of the country.

Harrington: There are two more topics I want to cover. We've been, in some ways, talking about the hope of what we're going to use this for to make clinical medicine better. There's also what's been called the hype, the pitfalls, and the perils. Then I want to get into what do you need to know, particularly if you're a resident fellow, junior faculty member.

Let's do the perils and the hype. I hear from clinicians, particularly clinicians of my generation, that this is just a black box. How do I know it's right? People point to, for example, the Epic Sepsis Model, which failed miserably, with headlines all over the place. They worry about how they know whether it's right.

John: That's an extremely important question to ask. We're still in the early days of using AI and trying to figure out the pitfalls and how to avoid them. I think it's important to ask along the way, for each study, what is going on here. Is this a model that we can trust and rely on?

I also think that it's not inevitable that AI will transform healthcare just yet because we are so early on, and there is hype. There are some studies that aren't done well. We need more clinicians understanding machine learning and getting involved in these discussions so that we can lead the field and actually use the AI to transform healthcare.

Harrington: As you push algorithms into the healthcare setting, how do we evaluate them to make sure that the models are robust, that the data are representative, and that the algorithm is giving us, I'll call it, the right answer?

John: That's the tough part. I think one of the tools that's important is a prospective trial. Not only creating an algorithm and implementing right away but rather studying how it does. Is it actually working prospectively before implementing it?

We also need to understand that in healthcare, we can't necessarily accept the black box. We need explainability and interpretability, to get an understanding of the variables that are used, how they're being used within the algorithm, and how they're being applied.

One example that I think is important is that Optum created a machine-learning model to predict who was at risk for medical complications and high healthcare expenditures. The model did well, so they used the model to determine who should get additional resources to prevent these complications.

It turns out that African Americans were utilizing healthcare less, so their healthcare expenditure was lower. Because of that, the algorithm was saying these are not individuals who need additional resources.

Harrington: It's classic confounding.

John: There is algorithmic bias that can be an issue. That's why we need to look at this as clinical researchers and ask, "What's going on here? Are there biases?"

Harrington: One of the papers over the past couple of years came from one of our faculty members at Stanford, which looked at where the data are coming from for these models. It pointed out that there are many states in this country that contribute no data to the AI models.

That's part of what you're getting at, and that raises all sorts of equity questions. You're in Massachusetts. I'm in California. There is a large amount of data coming from those two states. From Mississippi and Louisiana, where we are now, much less data. How do we fix that?

John: I think we fix it by getting more clinicians involved. I've met so many passionate data scientists who want to contribute to patient care and make the world a better place, but they can't do it alone. They can't recruit health centers in Mississippi. We need clinicians and clinical researchers who will say, "I want to help with advancing healthcare, and I want to contribute data so that we can make this work." Currently, we have so many advances in some ways, but AI can open up so many new opportunities.

Harrington: There's a movement to assure that the algorithm is fair, right? That's the acronym that's being used to make sure that the data are representative of the populations that you're interested in and that you've eliminated the biases.

I'm always intrigued. When you talk to your friends in the tech world, they say, "Well, we do this all the time. We do A/B testing." They just constantly run through algorithms through A/B testing, which is a randomized study. How come we don't do more of that in healthcare?

John: I think it's complicated because we don't have the systems to do that effectively. If we had a system where patients come into the emergency room and we're using AI in that manner, then maybe we could start to incorporate some of these techniques that the tech industry uses. That's part of the issue. One is setting up systems to get the right data and enough data, and the other is how do we operationalize this so that we can effectively use AI within our systems and test it within our systems.

Harrington: As a longtime clinical researcher and clinical trialist, I've always asked why it is that clinical research is separate from the process of clinical care.

If we're going to effectively evaluate AI algorithms, for example, we've got to break down those barriers and bring research more into the care domain.

John: Yes. I love the concept of a learning health system and incorporating data and data collection into the clinical care of patients.

Harrington: Fundamentally, I believe that the clinicians make two types of decisions, one of which is that the answer is known. I always use the example of aspirin if you're having an ST-segment elevation MI. That's known. It shouldn't be on the physician to remember that. The system and the algorithms should enforce that. On the other hand, for much of what we do, the answer is not known, or it's uncertain.

Why don't we allow ongoing randomization to help us decide what is appropriate? We're not quite there yet, but I hope before the end of my career that we can push that closer together.

All right. Final topic for you. You talked this morning about what you need to know. Cardiology fellows and residents must approach you all the time and say, "Hey, I want to do what you do," or, "I don't want to do what you do because I don't want to learn to code, but I want to know how to use it down the road."

What do you tell students, residents, and fellows that they need to know?

John: I think all trainees and all clinicians, actually, should understand the fundamentals of AI because it is being used more and more in healthcare, and we need to be able to understand how to interpret the data that are coming out of AI models.

I recommend looking up topics as you go along. Something I see is clinicians avoid papers that involve AI because they feel they don't understand it. Just dive in and start reading these papers, because most likely, you will understand most of it. You can look up topics as you go along.

There's one course I recommend online. It's a free course through Coursera called AI in Healthcare Specialization. It's a course by Stanford, and it does a really good job of explaining concepts without getting into the details of the coding and the math.

Other than that, for people who want to get into the coding, first of all, don't be afraid to jump in. I recently talked to a friend who is a gastroenterologist, and she said, "I'd love to get into AI, but I don't think I'd be good at it." I asked, "Well, why not?" She said, "Because men tend to be good at coding."

I do not think that's true.

Harrington: I don't think that's true either.

John: It's interesting because we're all affected to some extent by the notions that society has instilled in us. Sometimes it takes effort to go beyond what you think is the right path or what you think is the traditional way of doing things, and ask, "What else is out there. What else can I learn?"

If you do want to get into coding, I would say that it's extremely important to join a group that specializes in healthcare AI because there are so many pitfalls that can happen. There are mistakes that could be made without you realizing it if you try to just learn things on your own without guidance.

Harrington: Like anything else, join an experienced research group that's going to impart to you the skills that you need to have.

John: Exactly.

Harrington: The question about women being less capable coders than men, we both say we don't believe that, and the data don't support that. It's interesting. At Stanford, for many years, the most popular major for undergraduate men has been computer science. In the past few years, it's also become the most popular major for undergrad women at Stanford.

We're starting to see, to your point, that maybe some of those attitudes are changing, and there'll be more role models like you to really help that next generation of fellows.

Final question. What do you want to do when you're finished?

John: My interests have changed, and now I'm veering away from academia and more toward the operational side of things. As I get into it, my feeling is that currently, the challenge is not so much creating the AI models but rather, as I said, setting up these systems so that we can get the right data and implement these models effectively. Now, I'm leaning more toward informatics and operations.

I think it's an evolving process. Medicine is changing quickly, and that's what I would say to trainees and other clinicians out there as well. Medicine is changing quickly, and I think there are many opportunities for clinicians who want to help make it happen in a responsible and impactful manner.

Harrington: And get proper training to do it.

John: Yes.

Harrington: Great. Jenine, thank you for joining us. I want to thank you, the listeners, for joining us in this discussion about data science, artificial intelligence, and machine learning.

My guest today on theheart.org | Medscape Cardiology has been Dr Jenine John, who is a research fellow at Brigham and Women's Hospital, specifically in the data science and machine learning realm.

Again, thank you for joining.

Robert A. Harrington, MD, is chair of medicine at Stanford University and former president of the American Heart Association. (The opinions expressed here are his and not those of the American Heart Association.) He cares deeply about the generation of evidence to guide clinical practice. He's also an over-the-top Boston Red Sox fan.

Follow Bob Harrington on Twitter

Follow theheart.org | Medscape Cardiology on Twitter

Follow Medscape on Facebook, Twitter, Instagram, and YouTube

Continued here:

AI and Machine Learning in Healthcare for the Clueless - Medscape

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

The power of AI and machine learning in cloud integrations – Arizona Big Media

Posted: at 12:13 am



Cloud integration has become an essential aspect of modern businesses. With more companies adopting cloud computing, integrating AI and machine learning into cloud solutions is a game-changer for organizations looking to streamline their operations and stay ahead in a competitive market.

In this blog post, we'll explore the power of AI and machine learning in cloud integrations, and how they are transforming the way companies approach data management, analytics, and decision-making.

According to DoiT International, AI-powered cloud storage systems can analyze data to optimize storage allocation and reduce redundancy. Machine learning algorithms can identify patterns and predict storage needs, allowing for automated capacity planning and the ability to adapt to changes in storage requirements.

This not only enhances the efficiency of data storage but also reduces costs associated with manual data management.

AI and machine learning can be harnessed to detect and prevent data breaches in cloud environments. By monitoring data access patterns, machine learning algorithms can identify anomalous behavior and alert security teams to potential threats.

AI-driven security measures can also help organizations meet compliance requirements, protecting sensitive data and ensuring privacy.

AI and machine learning can be used to automate and optimize business processes in the cloud. With the ability to analyze vast amounts of data, these technologies can identify bottlenecks and inefficiencies, providing actionable insights for process improvement.

By automating repetitive tasks, organizations can reduce operational costs and focus on more strategic initiatives.

AI-powered predictive analytics enable organizations to leverage historical data to make informed decisions about the future. By utilizing machine learning algorithms, these tools can identify trends and make accurate predictions, empowering businesses to make data-driven decisions and plan for long-term success.

This can be especially useful in industries such as retail, finance, and healthcare, where forecasting is crucial.

AI and machine learning can help organizations better understand their customers and deliver personalized experiences.

By analyzing customer data in real-time, these technologies can identify patterns and trends, enabling companies to tailor marketing campaigns, product offerings, and support services. This results in improved customer satisfaction and increased loyalty.

Cloud integrations powered by AI and machine learning can facilitate seamless collaboration across departments and locations.

These technologies can automate the flow of information and help identify critical insights, enabling teams to work more efficiently and effectively. This can lead to increased productivity, better decision-making, and faster innovation.

AI-driven cloud integrations offer unparalleled scalability, allowing organizations to grow and adapt to changing business needs. As machine learning algorithms continue to learn and improve, cloud services can be fine-tuned to meet the specific requirements of a business.

This flexibility ensures that organizations can scale their infrastructure and services as needed, without incurring unnecessary costs.

Incorporating AI and machine learning into cloud integrations can significantly improve energy efficiency and contribute to environmental sustainability. Data centers, which are the backbone of cloud computing, consume a substantial amount of energy. AI-driven algorithms can optimize data center operations by predicting workload demands and making real-time adjustments to resource allocation.

This allows for reduced energy consumption, minimized carbon footprint, and lower operational costs.

Additionally, machine learning can be used to analyze and optimize the energy usage patterns of cloud-based applications, further contributing to a greener and more sustainable future.

The integration of AI and machine learning in cloud solutions is revolutionizing the way businesses operate, enabling them to harness the power of data, improve efficiency, and stay competitive in today's digital landscape.

By leveraging these advanced technologies, organizations can enhance data management, security, analytics, and customer experiences, all while benefiting from the scalability and flexibility that cloud computing offers. As AI and machine learning continue to evolve, we can expect to see even more innovative applications that will transform the way we work and live.

The rest is here:

The power of AI and machine learning in cloud integrations - Arizona Big Media

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Pentagon goes on AI hiring spree to bring machine learning capabilities to the battlefield – Fox News

Posted: at 12:13 am



The Pentagon is hiring data scientists, technologists and engineers as part of its effort to incorporate artificial intelligence into the machinery used to wage war.

The Defense Department has posted several AI jobs on USAjobs.gov over the last few weeks, including many with salaries well into six figures.

One of the higher paying jobs advertised in the last few weeks is for a senior technologist for "cognitive and decision science" at the U.S. Navy's Point Loma Complex in San Diego. That job starts at $170,000 and could pay as much as $212,000 a year for someone who can help insert "cutting-edge technology" into Navy weaponry and equipment.

NAVY SEES AI IN ITS FUTURE BUT ADMITS WE STILL HAVE A LOT TO LEARN

The Pentagon, led by Secretary of Defense Lloyd Austin, is looking to pay as much as $200,000 per year for AI specialists who can help automate and speed up decision-making in the military. (AP Photo/Alex Brandon)

That includes technologies such as "augmented reality, artificial intelligence, human state monitoring, and autonomous unmanned systems."

The Navy is also looking to hire a manager at Naval Information Warfare Systems Command in South Carolina to work on adding AI applications to support the "expeditionary warfighting, decision intelligence and support functions of the military," and a data scientist to help look for ways to incorporate AI into the decision-making processes of Naval Special Warfare Command.

This month, Chief of Naval Operations Michael Gilday said the Navy was moving quickly to use AI and said he imagined the use of "minimally manned" ships before moving to fully autonomous ships.

AI CHATBOT CHATGPT CAN INFLUENCE HUMAN MORAL JUDGMENTS, STUDY SAYS

Chief of Naval Operations Admiral Michael Gilday has said the Navy is looking for semi-autonomous ships as a stepping stone to full automation, and says AI is a big part of the equation. (Chip Somodevilla/Getty Images)

The Department of Defense's Chief Digital and Artificial Intelligence Office (CDAO) is also looking to hire two AI experts at jobs that start at $155,000. CDAO, which was established last year, is working to accelerate the use of AI to "generate decision advantage" in wartime; the office is looking to hire a supervisory program manager and a supervisory computer scientist.

Elsewhere, the Air Force is looking to hire a senior scientist in the field of "human machine teaming" who will guide programs in which "humans, machines, artificial intelligence, autonomous systems, and technology-centric solutions are the focus of the research." That job, based at Wright-Patterson Air Force Base in Ohio, starts at $156,000.

ARTIFICIAL INTELLIGENCE: SHOULD THE GOVERNMENT STEP IN? AMERICANS WEIGH IN

The Pentagon is looking at "human machine teaming" as it examines how to make faster, more effective decisions in wartime. (Reuters/Ints Kalnins)

The U.S. Army's Futures Command headquarters in Austin, Texas, is hiring a systems integration director to work with AI and other technology to "provide warfighters with the concepts and future force designs needed to dominate a future battlefield."

CLICK HERE TO GET THE FOX NEWS APP

And the National Geospatial-Intelligence Agency in Springfield, Virginia, is looking for a senior scientist for analytic technologies to research "machine learning and artificial intelligence methods to automate the analysis of images, video or other sensor data; modeling for anticipatory intelligence; human-machine teaming," among other things. That job starts at $141,000 per year.

Visit link:

Pentagon goes on AI hiring spree to bring machine learning capabilities to the battlefield - Fox News

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Physically informed machine-learning algorithms for the … – Nature.com

Posted: at 12:13 am



Optical images used in this study include transition metal dichalcogenide (TMD) flakes on SiO2/Si substrate, TMD flakes on Polydimethylsiloxane (PDMS), and TMD flakes on SiO2/Si and PDMS (if any). The usage of multiple types of substrates models more realistic flake fabrication environments and strengthens algorithm robustness. All these samples were mechanically exfoliated in a 99.999% N2-filled glove box (Fig.1a). The optical images were also acquired in the same environment with no exposure to ambient conditions occurring between fabrication and imaging processes (Fig.1b). The 83 MoSe2 images used throughout this study were taken at 100× magnification by various members of the Hui Deng group, who selected different amounts of light to illuminate the sample (Fig.1c). These images are divided into four smaller symmetric images containing randomized amounts of flake and bulk material which were then manually reclassified (Fig.1d).

MoSe2 flake fabrication and image collection and processing. (a) Mechanical exfoliation of MoSe2 with scotch tape to produce flakes which are then (b) imaged with optical microscopy. (c) A typical optical image of a flake and surrounding bulk material with a masked version of the image below which only displays the flake in white. (d) The four resulting images when the original image in (c) is divided with the masked version below. (e) The resulting 30 images produced through the augmentation methods of padding, rotating, flipping, and color jitter. (f) The image recreated with 20 colors again with the masked version below.

The extremely time-consuming process of locating a flake renders these datasets small, a common occurrence in many domains such as medical sciences and physics. However, deep learning models, such as CNNs, usually contain numerous parameters to learn and require large-scale data to train on to avoid severe overfitting. Data augmentation is a practical solution to this problem [24]. By generating new samples based on existing data, data augmentation produces training data with boosted diversity and sample sizes, on which better performing deep learning models can be trained (see Supplementary methods). The benefit of applying data augmentation is two-fold. First, it enlarges the data that CNNs are trained on. Second, the randomness induced by the augmentation of the data encourages the CNNs to capture and extract spatially invariant features to make predictions, improving the robustness of the models [24]. In fact, augmentation is quite common when using CNNs even with large datasets for this reason. Typically, different augmented images are generated on the fly during the model training period, which further helps models to extract robust features. Due to limited computing resources, we generated augmented data prior to fitting any models, expanding the data from 332 to 10,292 images (Fig.1e).

Once augmented, we applied color quantization to all images (Fig.1f). The quantization decreased noise and reduced image colors to a manageable number necessary for extracting the tree-based algorithms' features. The color quantization algorithm uses a pixel-wise Vector Quantization to reduce colors within the image to a desired quantity while preserving the original quality [16]. We employed a K-means clustering to locate the desired number of color cluster centers using a single byte and pixel representation in 3D space. The K-means clustering trains on a small sample of the image and then predicts the color indices for the rest of the image, recreating it with the specified number of colors (see Supplementary methods). We recreated the original MoSe2 images with 5, 20, and 256 colors to examine which resolution produced the most effective and generalizable models. Images were not recreated with fewer than five colors because the resulting images would consist of only background colors and not show the small flake in the original image. Images recreated with 20 colors appeared almost indistinguishable from the original while still greatly decreasing noise. To mimic an unquantized image, we recreated images with 256 color clusters. We compare the accuracies of the tree-based algorithms and CNNs on datasets of our images recreated with 5 and 20 colors. We also compare the tree-based algorithms' performance on our images recreated with 256 colors to the CNNs on the unquantized images (it is not necessary to perform quantization for CNN classification).
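A minimal sketch of this kind of pixel-wise quantization with K-means (scikit-learn is an assumption here; the paper's own implementation may differ):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.utils import shuffle

def quantize_colors(image, n_colors=20, n_train_pixels=1000, seed=0):
    """Recreate an RGB image (H x W x 3, values in [0, 1]) using n_colors cluster centers."""
    h, w, _ = image.shape
    pixels = image.reshape(-1, 3)

    # fit K-means on a small random sample of pixels, as described above
    sample = shuffle(pixels, random_state=seed, n_samples=n_train_pixels)
    kmeans = KMeans(n_clusters=n_colors, n_init=10, random_state=seed).fit(sample)

    # predict a color index for every pixel and rebuild the image from the cluster centers
    labels = kmeans.predict(pixels)
    return kmeans.cluster_centers_[labels].reshape(h, w, 3)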

After processing the optical images, we employ tree-based and deep learning algorithms for their classification. Tree-based algorithms are a family of supervised machine learning methods that perform classification or regression based on feature values using the tree-like structures they construct. A tree consists of an initial root node, decision nodes that indicate if the input image contains a 2D flake or not, and childless leaf nodes (or terminal nodes) where a target variable class or value is assigned [25]. Decision trees' various advantages include the ability to successfully model complex interactions with discrete and continuous attributes, high generalizability, robustness to predictor variable outliers, and an easily interpreted decision-making process [26, 27]. These attributes motivate the coupling of tree-based algorithms and optical microscopy for the accelerated identification of 2D materials. Specifically, we employ decision trees along with ensemble classifiers, such as random forests and gradient boosted decision trees, for improved prediction accuracies and smoother classification boundaries [28, 29, 30].

The features of the single and ensemble trees mimic the physical method of using color contrast for identifying graphene crystallites against a thick background. The flakes are sufficiently thin so that their interference color will differ from an empty wafer, creating a visible optical contrast for identification [11]. We calculate an analogous color contrast for each input image. The tree-based methods then use this color contrast data to make their decisions and classify images.

This color contrast for the tree-based methods is calculated from the 2D matrix representation of the input images as follows. The 2D matrix representation of the input image is fed to the quantization algorithm which recreates the image with the specified number of colors. We then calculate the color difference, based on RGB color codes, between every combination of color clusters to model optical contrast. These differences are sorted into different color contrast ranges which encompass the data extrema. To prevent model overfitting, especially for the ensemble classifiers, only three relevant color contrast ranges were chosen for training and testing the models: the lowest range, a middle range representative of the color contrast between a flake and background material, and the highest range (see Supplementary methods). This list of the number of color differences in each range is what the tree-based methods use for classification.
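A minimal sketch of that feature computation (the Euclidean distance in RGB space and the specific contrast ranges are placeholders, not the paper's exact choices):

from itertools import combinations
import numpy as np

def contrast_features(cluster_centers, bins=((0, 50), (100, 150), (200, 255))):
    """Count pairwise RGB distances between color-cluster centers that fall in each contrast range.

    cluster_centers: array of shape (n_colors, 3) with RGB values in [0, 255].
    bins: the low / middle / high contrast ranges (placeholder values).
    """
    counts = [0] * len(bins)
    for c1, c2 in combinations(cluster_centers, 2):
        contrast = np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))
        for i, (lo, hi) in enumerate(bins):
            if lo <= contrast < hi:
                counts[i] += 1
    return counts  # the three-number feature vector used by the tree-based models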

Once these features are calculated, we employed a k-fold cross-validation grid search to determine the best values for each estimator's hyperparameters. The k-fold cross-validation, an iterative process that divides the training data into k partitions, uses one partition for validation (testing) and the remaining k-1 for training during each iteration [31]. For each tree-based method, the estimator with the combination of hyperparameters which produces the highest accuracy on the test data was selected (see Supplementary methods). We employed a five-fold cross-validation with a standard 75/25 train/test split. After fine-tuning the decision tree's hyperparameters with k-fold cross-validation, we produced visualizations of the estimator to evaluate the physical nature of its decisions. The gradient boosted decision tree and random forest estimators represent ensembles of decision trees, so the overall nature of their decisions can be extrapolated from a visualization of a single decision tree since they all use the same inherently physical features.
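A minimal sketch of that kind of grid search for a single decision tree (X_features, y_labels, and the parameter grid are placeholders, not the paper's values):

from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.tree import DecisionTreeClassifier

# X_features: the per-image color-contrast counts; y_labels: 1 if the image contains a flake, else 0
X_train, X_test, y_train, y_test = train_test_split(X_features, y_labels, test_size=0.25, random_state=0)

param_grid = {"max_depth": [2, 3, 5, None], "min_samples_leaf": [1, 5, 10]}  # placeholder grid
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=5, scoring="accuracy")
search.fit(X_train, y_train)

print(search.best_params_)
print(search.best_estimator_.score(X_test, y_test))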

Along with the tree-based methods, we also examined deep learning algorithms. Recently, deep neural networks, which learn more flexible latent representations with successive layers of abstraction, have shown great success on a variety of tasks including object recognition [32, 33]. Deep convolutional neural networks take an image as input and output a class label or other types of results depending on the goal of the task. During the feed-forward step, a sequence of convolution and pooling operations is applied to the image to extract visual features. The CNN model we employ is a ResNet18 [34], and we train new networks from scratch by initializing parameters with uniform random variables [35] due to the lack of public neural networks pre-trained on similar data. The training of ResNet18 is as follows. We used 75% of the original images and all their augmented images as the training set. This can further be split into training and validation sets when tuning hyper-parameters. We used a small batch size of 4 and ran 50 epochs using the stochastic gradient descent method with momentum [36], with a learning rate of 0.01 and a momentum factor of 0.9. Various efforts work to produce accurate visualizations of the inner layers of CNNs, including Grad-CAM, which we employed. Grad-CAM does not give a complete visualization of the CNN as it only uses information from the last convolutional layer. However, this last convolutional layer is expected to have the best trade-off between high-level semantics and spatial information, rendering Grad-CAM successful in visualizing what CNNs use for decisions [22].
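A minimal sketch of that training setup in PyTorch (torchvision's ResNet18, the default parameter initialization, and the dataset/loader names are assumptions; the hyperparameters match those stated above):

import torch
import torch.nn as nn
import torchvision

# ResNet18 trained from scratch (no pretrained weights), two output classes: flake vs. no flake
model = torchvision.models.resnet18(weights=None, num_classes=2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # values stated in the text

# train_loader is assumed: a DataLoader over the augmented training images with batch_size=4
for epoch in range(50):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()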

View original post here:

Physically informed machine-learning algorithms for the ... - Nature.com

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Machine Learning Tidies Up the Cosmos – Universe Today

Posted: at 12:13 am



Amanda Morris, a press release writer at Northwestern University, describes an important astronomical effect in terms entertaining enough to be worth reposting here: The cosmos would look a lot better if the Earth's atmosphere wasn't photobombing it all the time. That's certainly one way to describe the air's effect on astronomical observations, and it's annoying enough to astronomers that they constantly have to correct for distortions from the Earth's atmosphere, even at the most advanced observatories at the highest altitudes. Now a team from Northwestern and Tsinghua Universities has developed an AI-based tool to allow astronomers to automatically remove the blurring effect of the Earth's atmosphere from pictures taken for their research.

Dr. Emma Alexander and her student Tianao Li developed the technique in the Bio Inspired Vision Lab, a part of Northwestern's engineering school, though Li was a visiting undergraduate from Tsinghua University in Beijing. Dr. Alexander realized that accuracy was an essential part of scientific imaging, but astronomers had a tough time as their work was constantly being photobombed, as Ms. Morris put it, by the atmosphere.

We've spent plenty of time in articles discussing the difficulties of seeing and the distortion effect that air brings to astronomical pictures, so we won't rehash that here. But it's worth looking at the details of this new technique, which could save astronomers significant amounts of time either chasing bad data or deblurring their own images.

Using a technique known as optimization together with a more commonly known AI technique called deep learning, the researchers developed an algorithm that could successfully deblur an image with less error than both classic and modern methods. This resulted in crisper images that are both scientifically more useful and more visually appealing. However, Dr. Alexander notes that was simply a happy side effect of their work to try to improve the science.

To train and test their algorithm, the team worked with simulated data developed by the team responsible for the upcoming Vera C. Rubin Observatory, which is set to be one of the world's most powerful ground-based telescopes when it begins operations next year. Utilizing the simulated data as a training set allowed the Northwestern researchers to get a head start on testing their algorithm ahead of the observatory's opening, and also to tweak it to make it well-suited for what will arguably be one of the most important observatories of the coming decades.

Besides that usefulness, the team also decided to make the project open source. They have released a version on GitHub, so programmers and astronomers alike can pull the code, tweak it to their own specific needs, and even contribute to a set of tutorials the team developed that could be used with data from almost any ground-based telescope. One of the beauties of algorithms like this is that they can easily remove photobombers, even ones less substantive than most.

Learn More:
Northwestern: AI algorithm unblurs the cosmos
Li & Alexander: Galaxy Image Deconvolution for Weak Gravitational Lensing with Unrolled Plug-and-Play ADMM
UT: Telescope's Laser Pointer Clarifies Blurry Skies
UT: A Supercomputer Gives Better Focus to Blurry Radio Images

Lead Image: Different phases of deblurring that the algorithm applies to a galaxy. The original image is at the top left; the final image is at the bottom right. Credit: Li & Alexander


See original here:

Machine Learning Tidies Up the Cosmos - Universe Today

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Distinguishing between Deep Learning and Neural Networks in … – NASSCOM Community

Posted: at 12:13 am


without comments

What are the Differences between Deep Learning and Neural Networks in Machine Learning?

In recent years, advances in artificial intelligence have made people familiar with the terms machine learning, deep learning, and neural networks. Deep learning and neural networks have numerous applications within machine learning.

Deep learning and neural networks analyze complex datasets and achieve high accuracy on tasks that classical algorithms find challenging, and they are especially well suited to unstructured and unlabeled data. Because the terms are deeply interconnected, most people assume that deep learning, neural networks, and machine learning mean the same thing. However, deep learning and neural networks are distinct concepts within machine learning and serve different purposes.

Deep learning and neural networks are sub-branches of machine learning that play a prominent role in developing machine learning algorithms that automate human activities. In this article, you will learn about deep learning and neural networks in machine learning.

Neural networks are machine learning algorithms designed to imitate the human brain. A neural network works the way biological neurons do, and its units are therefore called artificial neurons.

An artificial neural network (ANN) comprises three interconnected layers: the input layer, the hidden layers, and the output layer. The input layer receives the raw data, the hidden layers process it, and the processed result reaches the output layer.

Neural network algorithms cluster, classify, and label data through machine perception. They are mainly designed to identify numerical patterns in vector representations of real-world data such as images, audio, text, and time series.
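As a concrete illustration of the input-hidden-output structure described above, here is a minimal forward pass in NumPy; the layer sizes, random weights, and activation functions are illustrative assumptions rather than part of any particular model.

# A minimal forward pass through a three-layer network (NumPy).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer (4 features) -> hidden layer (8 units)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # hidden layer -> output layer (3 classes)

X = rng.normal(size=(5, 4))                     # a batch of 5 raw input vectors
hidden = relu(X @ W1 + b1)                      # hidden-layer activations
probs = softmax(hidden @ W2 + b2)               # class probabilities from the output layer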

Deep learning is a subset of machine learning designed to imitate how the human brain processes data. It builds patterns, similar to the human brain, that help in decision-making, and it can learn from structured and unstructured data in a hierarchical manner.

A deep learning model consists of multiple hidden layers of nodes and is called a deep neural network or deep learning system. Deep neural networks are trained on complex data and make predictions based on the patterns they find. Convolutional neural networks, recurrent neural networks, and deep belief networks are some examples of deep learning architectures in machine learning.

The following comparison summarizes the key differences, parameter by parameter.

PARAMETER: Definition
DEEP LEARNING: A machine learning architecture consisting of multiple artificial neural networks (hidden layers) for feature extraction and transformation.
NEURAL NETWORK: An ML structure comprising computational units called artificial neurons, designed to mimic the human brain.

PARAMETER: Structure
DEEP LEARNING: The components of deep learning include:-
NEURAL NETWORK: The components of the neural network include:-

PARAMETER: Architecture
DEEP LEARNING: The deep learning model architecture consists of 3 types:-
NEURAL NETWORK: The neural network model architecture consists of:-

PARAMETER: Time & Accuracy
DEEP LEARNING: It takes more time to train deep learning models, but they achieve high accuracy.
NEURAL NETWORK: It takes less time to train neural networks, but they achieve lower accuracy.

PARAMETER: Performance
DEEP LEARNING: Deep learning models perform tasks faster and more efficiently than neural networks.
NEURAL NETWORK: Neural networks perform poorly compared to deep learning models.

PARAMETER: Applications
DEEP LEARNING: Various applications of Deep Learning:-
NEURAL NETWORK: Various applications of Neural Networks:-

Deep learning and neural networks are popular building blocks in machine learning architectures because of their ability to perform different tasks efficiently. On the surface they seem similar, but as this blog has shown, there are clear differences between the two.

Deep learning and neural networks have complex architectures that take time to learn. To distinguish between deep learning and neural networks in more depth, one must learn more about machine learning algorithms in general. If you are unsure how to start learning about machine learning algorithms, you should check out Advanced Artificial Intelligence and Machine Learning for in-depth learning.

View original post here:

Distinguishing between Deep Learning and Neural Networks in ... - NASSCOM Community

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Top 10 Deep Learning Algorithms You Should Be Aware of in 2023 – Analytics Insight

Posted: at 12:13 am


without comments

Here are the top 10 deep learning algorithms you should be aware of in the year 2023

Deep learning has become extremely popular in scientific computing, and businesses that deal with complicated problems frequently employ its techniques. All deep learning algorithms use some kind of neural network to carry out specific tasks. This article looks at the key artificial neural networks and how deep learning algorithms operate to simulate the human brain.

Deep learning uses artificial neural networks to carry out complex calculations on vast volumes of data. It is a form of artificial intelligence based on how the human brain is organized and functions, and its methods train machines by teaching them from examples. Deep learning is frequently used in sectors like healthcare, e-commerce, entertainment, and advertising. Here are the top 10 deep learning algorithms you should be aware of in 2023.

To handle complex problems, deep learning algorithms need a lot of processing power and data, but they can operate on nearly any type of data. Let's now take a closer look at the top 10 deep learning algorithms to be aware of in 2023.

CNNs, also known as ConvNets, have multiple layers and are mostly used for object detection and image processing. Yann LeCun built the first CNN in 1988, when it was called LeNet; it was used to recognize characters such as ZIP codes and digits. Today CNNs are used to identify satellite images, process medical imaging, forecast time series, and detect anomalies.

DBNs are generative models made up of several layers of latent, stochastic variables. Latent variables, often called hidden units, are characterized by binary values. Each RBM layer in a DBN can communicate with both the layer above it and the layer below it because there are connections between the layers of a stack of Boltzmann machines. For image, video, and motion-capture data recognition, Deep Belief Networks (DBNs) are employed.

RNNs have connections that form directed cycles, which allow the output of a previous step (for example, from an LSTM cell) to be fed back as input to the current step. Thanks to this internal memory, the network can remember prior inputs while processing the current one. Natural language processing, time-series analysis, handwriting recognition, and machine translation are all common applications of RNNs.

GANs are generative deep learning algorithms that produce new data instances resembling the training data. A GAN has two components: a generator, which learns to produce fake data, and a discriminator, which learns to tell that fake data apart from real examples.

Over time, GANs have come into wider use. They can be used in dark-matter studies to simulate gravitational lensing and to sharpen astronomical images. Video game developers use GANs to upscale low-resolution 2D textures from vintage games to 4K or higher resolutions through image training.
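To make the generator/discriminator interplay concrete, here is a minimal, hypothetical GAN training loop in PyTorch in which the generator learns to mimic samples from a one-dimensional Gaussian; the network sizes, optimizer settings, and toy data are assumptions for illustration only.

# A minimal GAN sketch (PyTorch): the generator learns to mimic a 1-D Gaussian.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim = 8
generator = nn.Sequential(nn.Linear(latent_dim, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0          # "training data": samples from N(2, 0.5)
    fake = generator(torch.randn(64, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    loss_d = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    opt_g.zero_grad()
    loss_g = bce(discriminator(fake), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()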

Long short-term memory networks (LSTMs) are a type of RNN that can learn and remember long-term dependencies; recalling past information over extended periods is their default behavior.

LSTMs preserve information over time. Because they can recall prior inputs, they are helpful in time-series prediction. An LSTM cell contains four interacting layers that communicate in a chain-like structure. Besides time-series prediction, LSTMs are frequently employed for voice recognition, music creation, and drug research.
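As an illustration of time-series prediction with an LSTM, here is a minimal next-step forecaster in PyTorch; the sine-wave data, window length, network size, and training settings are illustrative assumptions.

# A minimal LSTM sketch for next-step time-series prediction (PyTorch).
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, time, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict the value after the window

# Build (window -> next value) pairs from a sine wave.
series = torch.sin(torch.linspace(0, 20, 500))
window = 30
X = torch.stack([series[i:i + window] for i in range(len(series) - window)]).unsqueeze(-1)
y = series[window:].unsqueeze(-1)

model = LSTMForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()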

Radial basis function networks (RBFNs) are a special class of feedforward neural networks that use radial basis functions as activation functions. They typically have an input layer, a hidden layer, and an output layer, and they are used for classification, regression, and time-series prediction.

Self-organizing maps (SOMs), created by Professor Teuvo Kohonen, support data visualization by using self-organizing artificial neural networks to reduce the dimensionality of the data. Data visualization attempts to address the problem that high-dimensional data is difficult for humans to picture, and SOMs were developed to help people comprehend such high-dimensional data.

Restricted Boltzmann machines (RBMs), created by Geoffrey Hinton, are neural networks that can learn from a probability distribution over a set of inputs. Classification, dimensionality reduction, regression, feature learning, collaborative filtering, and topic modeling are all performed with this deep learning technique, and RBMs are the fundamental building blocks of DBNs.
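scikit-learn ships a Bernoulli RBM, so the feature-learning use mentioned above can be sketched in a few lines; the binary stand-in data and the component count below are illustrative assumptions.

# A minimal RBM feature-learning sketch (scikit-learn).
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((200, 16)) > 0.5).astype(float)   # binary stand-in data, 200 samples x 16 inputs

rbm = BernoulliRBM(n_components=4, learning_rate=0.05, n_iter=20, random_state=0)
hidden = rbm.fit_transform(X)   # learned hidden-unit activations: a 4-dimensional representation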

An autoencoder is a particular kind of feedforward neural network in which the input and output are identical. Geoffrey Hinton designed autoencoders in the 1980s to address unsupervised learning problems. These trained neural networks replicate the data from the input layer to the output layer. Image processing, popularity forecasting, and drug development are just a few applications of autoencoders.
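The "copy the input to the output through a bottleneck" idea can be illustrated with a minimal PyTorch autoencoder; the layer sizes and the random stand-in data below are assumptions for illustration.

# A minimal autoencoder sketch (PyTorch): identical input and output dimensions, narrow bottleneck.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 64))
autoencoder = nn.Sequential(encoder, decoder)

data = torch.rand(256, 64)                          # stand-in for real inputs
optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    reconstruction = autoencoder(data)
    loss = loss_fn(reconstruction, data)            # learn to reproduce the input at the output
    loss.backward()
    optimizer.step()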

MLPs are feedforward neural networks made up of multiple layers of perceptrons with activation functions. An MLP consists of a fully connected input layer and output layer; it has the same number of input and output layers but may have several hidden layers in between. MLPs can be used to build speech recognition, image recognition, and machine translation software.
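Since this site's earlier examples use scikit-learn, here is a minimal MLP classifier on the library's built-in digits dataset; the two hidden-layer sizes and other settings are illustrative choices, not a recommendation.

# A minimal MLP sketch using scikit-learn on the built-in digits dataset.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(64, 32),   # two hidden layers of perceptrons
                    activation="relu",
                    max_iter=500,
                    random_state=0)
mlp.fit(X_train, y_train)
print("Test accuracy:", mlp.score(X_test, y_test))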

See original here:

Top 10 Deep Learning Algorithms You Should Be Aware of in 2023 - Analytics Insight

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

The week in AI: OpenAI attracts deep-pocketed rivals in Anthropic and Musk – TechCrunch

Posted: at 12:13 am


without comments

Image Credits: Jaap Arriens/NurPhoto/Getty Images

Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here's a handy roundup of the last week's stories in the world of machine learning, along with notable research and experiments we didn't cover on their own.

The biggest news of the last week (we politely withdraw our Anthropic story from consideration) was the announcement of Bedrock, Amazon's service that provides a way to build generative AI apps via pretrained models from startups including AI21 Labs, Anthropic and Stability AI. Currently available in limited preview, Bedrock also offers access to Titan FMs (foundation models), a family of AI models trained in-house by Amazon.

It makes perfect sense that Amazon would want to have a horse in the generative AI race. After all, the market for AI systems that create text, audio, speech and more could be worth more than $100 billion by 2030, according to Grand View Research.

But Amazon has a motive beyond nabbing a slice of a growing new market.

In a recent Motley Fool piece, Timothy Green presented compelling evidence that Amazon's cloud business could be slowing. The company reported 27% year-over-year revenue growth for its cloud services in Q3 2022, but the uptick slowed to a mid-20% rate by the tail end of the quarter. Meanwhile, operating margin for Amazon's cloud division was down 4 percentage points year over year in the same quarter, suggesting that Amazon expanded too quickly.

Amazon clearly has high hopes for Bedrock, going so far as to train the aforementioned in-house models ahead of the launch, which was likely not an insignificant investment. And lest anyone cast doubt on the company's seriousness about generative AI, Amazon hasn't put all of its eggs in one basket. This week it made CodeWhisperer, its system that generates code from text prompts, free for individual developers.

So, will Amazon capture a meaningful piece of the generative AI space and, in the process, reinvigorate its cloud business? It's a lot to hope for, especially considering the tech's inherent risks. Time will tell, ultimately, as the dust settles in generative AI and competitors large and small emerge.

Here are the other AI headlines of note from the past few days:

Meta open-sourced a popular experiment that let people animate drawings of people, however crude they were. It's one of those unexpected applications of the tech that is both delightful and totally trivial. Still, people liked it so much that Meta is letting the code run free so anyone can build it into something.

Another Meta experiment, called Segment Anything, made a surprisingly large splash. LLMs are so hot right now that it's easy to forget about computer vision, and even then, about a specific part of the pipeline that most people don't think about. But segmentation (identifying and outlining objects) is an incredibly important piece of any robot application, and as AI continues to infiltrate the real world, it's more important than ever that it can, well, segment anything.

Professor Stuart Russell has graced the TechCrunch stage before, but our half-hour conversations only scratch the surface of the field. Fortunately, he routinely gives lectures, talks, and classes on the topic, which, thanks to his long familiarity with it, are very grounded and interesting, even when they have provocative names like "How not to let AI destroy the world."

You should check out this recent presentation, introduced by another TC friend, Ken Goldberg:

Read more:

The week in AI: OpenAI attracts deep-pocketed rivals in Anthropic and Musk - TechCrunch

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Using AI in electronic medical records to save the lives of children – The Columbus Dispatch

Posted: at 12:13 am


without comments

Abbie (Roth) Miller | Special to The Columbus Dispatch | USA TODAY Network

Artificial intelligence, including machine learning, is everywhere these days. From news headlines to talk show monologues, it seems like everyone is talking about artificial intelligence (AI) and how it is rapidly changing the world around us.

Machine learning is a type of AI that uses computer systems that can learn and adapt without exact instructions. They use algorithms and statistical models to analyze and make inferences based on patterns in data. Many forms of AI that we use regularly, such as facial recognition, product recommendations and spam filtering, are based on machine learning.

More: Paving future solutions for congenital heart disease | Pediatric Research

At Nationwide Children's Hospital, experts in critical care, hospital medicine, data science and informatics recently published a machine learning tool that identifies children at risk for deterioration. In hospital settings, deterioration refers to a patient getting worse and having a higher risk of morbidity or mortality.

A year and a half after the team implemented the tool, deterioration events were down 77% compared to expected rates.

The tool is called the Deterioration Risk Index (DRI). It is trained on disease-specific groups: structural heart defects, cancer and general (neither cancer nor heart defect). By training the algorithm for each subpopulation, the research team improved the accuracy of the tool.

A lot of factors, including changing lab values, medications, medical history, nurse observations and more, come together to determine a patient's risk of deterioration. Because the DRI is integrated into the electronic medical record, the algorithm can take in all of these data and analyze them in real time. It sounds an alarm if a patient becomes high-risk for deterioration, triggering the action and attention of the care team. To promote adoption of the DRI, the team integrated the tool into existing hospital emergency-response workflows: when an alert sounds, the care team responds with a patient assessment and a huddle at the bedside to develop a risk mitigation and escalation plan for the identified patient.
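The team's published report (mentioned later in this article) contains the actual algorithm. The sketch below is only a hypothetical illustration of the general pattern described here, namely one risk model per disease-specific subpopulation plus a threshold-based alert, with invented feature names, groups, and threshold; it is not the published DRI.

# A hypothetical per-subpopulation risk-scoring sketch (scikit-learn), not the published DRI.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def train_subpopulation_models(records: pd.DataFrame, feature_cols, label_col="deteriorated"):
    """Train one classifier per subpopulation (e.g. cardiac, cancer, general)."""
    models = {}
    for group, subset in records.groupby("subpopulation"):
        model = GradientBoostingClassifier(random_state=0)
        model.fit(subset[feature_cols], subset[label_col])
        models[group] = model
    return models

def risk_alert(models, patient_row: pd.Series, feature_cols, threshold=0.8):
    """Score one patient with the model for their subpopulation and flag high risk."""
    model = models[patient_row["subpopulation"]]
    risk = model.predict_proba(patient_row[feature_cols].to_frame().T)[0, 1]
    return risk, risk >= threshold   # an alert would trigger a bedside assessment and huddle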

More: Genomic medicine offers diagnostic hope for people with rare diseases | Pediatric Research

Many algorithms have been developed to predict risk and improve clinical outcomes. But the majority don't make it from the computer to the clinic. According to the DRI team, collaboration and transparency were key to making the DRI work in the real world. The tool was in development for more than five years. During that time, the team met with clinical units and demonstrated the tool in its various stages of development. In those meetings, the care teams asked questions and provided feedback.

Perhaps most importantly, the tool was built with full transparency about how it works. The DRI is not a black box like some machine learning or AI tools that have made headlines recently. The team can show clinicians what data goes into the algorithm and how the algorithm evaluates it.

More: Insights lead to new guidelines for children with cerebral palsy | Pediatric Research

The DRI team has also published the full algorithm in its report in the journal Pediatric Critical Care Medicine. Using this information, other hospitals can retrain the algorithm on their own data to help improve care for children at their hospital.

This project is just one example of how machine learning and AI are showing up in health care and research. It is also a great example of how collaboration and transparency can help us make the most of these new tools.

Abbie (Roth) Miller is the managing editor for Pediatrics Nationwide and manager for science and medical content at Nationwide Children's Hospital.

Abbie.Roth@nationwidechildrens.org

Visit link:

Using AI in electronic medical records to save the lives of children - The Columbus Dispatch

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning




