
Archive for the ‘Machine Learning’ Category

Automated Machine Learning with Python: A Case Study – KDnuggets

Posted: April 17, 2023 at 12:13 am



In today's world, every organization wants to use machine learning to analyze the data its users generate daily. With machine or deep learning algorithms, they can analyze that data and then make predictions on test data in the production environment. But if we follow this process ourselves, we may face problems: building and training machine learning models is time-consuming and requires expertise in domains like programming, statistics and data science.

To overcome such challenges, Automated Machine Learning (AutoML) comes into the picture. It has emerged as one of the most popular ways to automate many aspects of the machine learning pipeline. In this article, we will discuss AutoML with Python through a real-life case study on the prediction of heart disease.

Heart problems are a major cause of death worldwide. The only way to reduce their impact is to detect the disease early with automated methods, so that less time is spent on diagnosis, and then take preventive measures. With this problem in mind, we will explore a dataset of medical patient records to build a machine learning model that predicts the likelihood of a patient having heart disease. Such a solution could easily be applied in hospitals so that doctors can provide treatment as soon as possible.

The complete model pipeline we followed in this case study is shown below.

Step-1: Before starting to implement, let's import the required libraries, including NumPy for matrix manipulation, Pandas for data analysis, and Matplotlib for Data Visualization.
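A minimal sketch of this step (the import aliases are common conventions, not something mandated by the article):

```python
# Core libraries used throughout the case study
import numpy as np                 # matrix manipulation
import pandas as pd                # data analysis
import matplotlib.pyplot as plt    # data visualization
```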

Step-2: After importing all the required libraries in the above step, we will load our dataset into a Pandas DataFrame, which stores it in an optimized manner: DataFrames are much more efficient in terms of both space and time complexity than data structures like linked lists, arrays or trees.

Further, we can perform data preprocessing to prepare the data for modelling and generalization. To download the dataset used here, refer to the link.
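For illustration, a hedged sketch of Step-2; the file name heart_disease.csv and the preprocessing choices are assumptions, since the article only links to the dataset:

```python
# Load the heart-disease records into a Pandas DataFrame (file name is hypothetical)
df = pd.read_csv("heart_disease.csv")

# Light preprocessing before modelling: inspect, deduplicate, handle missing values
print(df.info())
df = df.drop_duplicates()
df = df.dropna()   # or impute missing values, depending on the data
```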

Step-3: After preparing the data for the machine learning model, we will use one of the famous automated machine learning libraries called H2O.ai, which helps us create and train the model.

The main benefit of this platform is that it provides a high-level API through which we can automate many aspects of the pipeline, including feature engineering, model selection, data cleaning and hyperparameter tuning, which drastically reduces the time required to train a machine learning model for any data science project.

Step-4: Now, to build the model, we will use the H2O.ai API. To use it, we have to specify the type of problem, whether it is a regression problem, a classification problem or some other type, along with the target variable. The library then automatically chooses the best model for the given problem statement, considering algorithms such as Support Vector Machines, Decision Trees, deep neural networks, etc.
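A hedged sketch of Steps 3 and 4 using H2O's AutoML API; it continues from the DataFrame df above, and the target column name ("target"), the split ratio and the model limit are illustrative assumptions rather than values given in the article:

```python
import h2o
from h2o.automl import H2OAutoML

h2o.init()   # start a local H2O cluster

# Convert the Pandas DataFrame to an H2OFrame and mark the task as classification
hf = h2o.H2OFrame(df)
y = "target"                              # hypothetical name of the heart-disease label column
x = [c for c in hf.columns if c != y]
hf[y] = hf[y].asfactor()                  # treat the target as categorical

train, test = hf.split_frame(ratios=[0.8], seed=42)

# AutoML searches over several algorithm families and ranks the candidates
aml = H2OAutoML(max_models=20, seed=42)
aml.train(x=x, y=y, training_frame=train)
print(aml.leaderboard.head())
```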

Step-5: After finalizing the best model from the set of algorithms, the most critical task is fine-tuning its hyperparameters. This tuning process involves techniques such as grid-search cross-validation, which find the best set of hyperparameters for the given problem.

Step-6: Now, the final task is to check the model's performance using evaluation metrics such as the confusion matrix, precision and recall for classification problems, or MSE, MAE, RMSE and R-squared for regression models, so that we can infer how the model will behave in the production environment.
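One way to obtain these metrics with H2O is through the leader model's performance object; this sketch assumes the binary-classification setup from the earlier steps:

```python
# Evaluate the best AutoML model on the held-out test frame
perf = aml.leader.model_performance(test)

print(perf.confusion_matrix())   # classification: confusion matrix
print("AUC:", perf.auc())
print("F1 :", perf.F1())
# For a regression model the analogous calls would be perf.mse(), perf.mae(),
# perf.rmse() and perf.r2()
```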

Step-7: Finally, we will plot the ROC curve, which shows the trade-off between the true positive rate and the false positive rate (a false positive means the model predicts the positive class for a sample that actually belongs to the negative class; a false negative means the model predicts the negative class for a sample that actually belongs to the positive class). We will also print the confusion matrix, which completes the model's prediction and evaluation on the test data. Then we will shut down our H2O cluster.
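A sketch of Step-7 that plots the ROC curve from the leader model's test predictions with scikit-learn and Matplotlib (one illustrative route; it assumes a 0/1 target so the positive-class probability sits in the "p1" column), then shuts the cluster down:

```python
from sklearn.metrics import roc_curve, auc

# Positive-class probabilities from the H2O leader model
pred = aml.leader.predict(test).as_data_frame()
y_score = pred["p1"].values
y_true = test[y].as_data_frame()[y].astype(int).values

fpr, tpr, _ = roc_curve(y_true, y_score)
plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.3f}")
plt.plot([0, 1], [0, 1], linestyle="--")   # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()

h2o.shutdown(prompt=False)   # release the H2O cluster
```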

You can access the notebook of the mentioned code from here.

To conclude this article, we have explored the different aspects of one of the most popular platforms that automate the whole machine learning or data science workflow, through which we can easily create and train machine learning models using the Python programming language. We have also covered a well-known case study of heart disease prediction, which shows how to use such platforms effectively. With such platforms, machine learning pipelines can be easily optimized, saving engineers' time and reducing system latency and resource utilization such as GPU and CPU cores, making these capabilities accessible to a large audience.

Aryan Garg is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. His interests lie in the fields of web development and machine learning. He has pursued these interests and is eager to work further in these directions.

Go here to see the original:

Automated Machine Learning with Python: A Case Study - KDnuggets

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

This app gave my standard iPhone camera a Pro upgrade here’s … – Laptop Mag

Posted: at 12:13 am



On non-Pro iPhones, Apple omits the telephoto lens. This means that if you'd like to take close-up pictures with an iPhone 14, you must either move yourself closer or live with an artificially zoomed, subpar shot with fuzzy details. In fact, it barely qualifies as zoom, since all your iPhone does is crop the scene you're zooming into from a larger image. Can machine learning help?

Bringing machine learning to the camera app has worked for companies like Google and Samsung. Both use it to supercharge their phones' telephoto cameras, allowing users to zoom up to 100x while improving quality, too. The Google Pixel 7, for example, which has no physical zoom lens, comes equipped with a technology called Super Res Zoom that upscales digitally zoomed photos and produces results similar to the ones a dedicated 2x telephoto camera would capture.

Halide, a paid pro-level camera app, wants to bring these capabilities to the iPhone.

Halide's latest update offers a Neural Telephoto mode, which uses machine learning to capture crisper, cleaner digitally zoomed pictures for non-Pro iPhone users. It works on models from the iPhone SE up to the latest iPhone 14. It relies on Apple's built-in Neural Engine, so you don't have to wait for the Halide app to apply its machine-learning algorithms.

The Halide team says the new Neural Telephoto feature runs on the same tech that powers the app's ability to replicate another iPhone Pro exclusive perk: macro photography, which we found effective at capturing close-up shots on non-Pro iPhones.

Halide's machine-learning model is trained on millions of pictures, teaching it to spot the low-quality parts of a picture. After identifying low-resolution areas, it can enhance them without over-manipulating the photo. For example, if you're trying to zoom into a flower, the model knows what the flower's borders should look like, and the app uses that information to refine the finer details.

Apple's digital zoom is notoriously poor, and the differences show in the results. I've been testing Halide's new Neural Telephoto mode for a few days now, and no matter the lighting conditions, its 2x zoom consistently captured sharper and better-contrasted shots. Though many of these differences won't be obvious until you inspect them on a larger screen, they can feel significant if you're planning to further edit the image or print it.

When I took a 2x zoomed-in picture of a cactus basking in the sun on my desk, for example, my iPhone 13 mini's default camera app couldn't handle the sunlight's hue and oversaturated it: the colors began to spill outside their bounds. In the embedded picture, you can see that the cactus green appears on the blue pot's borders. Similarly, the rock next to it has a glowing green haze around it. The Halide shot didn't have these issues, and although it seems a little less bright, it was true to the scene.

In low light as well, 2x shots taken with the iPhone's native camera app often feature watercolor-like shades with fuzzy borders, as is evident in the tuk-tuk photo shown below, while Halide keeps the outlines and focus intact. Another highlight of Halide is that when you take a close-up shot, it saves both the 2x enhanced JPEG file and the original 1x-zoom RAW file, so you still have a usable picture in case the zoomed-in one is subpar.

Getting into the Neural Telephoto mode on Halide is fairly straightforward, too. All you have to do is fire up the app and touch the 1x button at the bottom right corner, and it will automatically jump directly into the 2x mode.

Halide agrees this still is no match for a physical telephoto lens, and I concur. Although it edges out the default camera in some complex scenarios, the differences are negligible in the rest, and oftentimes its enhanced shots looked even more artificial, as if someone had maxed out the sharpness slider in a photo-editing app. So you will have to decide how much a better 2x digital zoom matters to you, because the app isn't free. You can try Halide for free for a week before paying $2.99 monthly (or $11.99 yearly). Alternatively, you can pay $59.99 for a lifetime license.

Halide's cost, without a doubt, is steep, but the startup frequently releases major updates, like the macro mode, that make the package worthwhile. In addition, it lets you customize a range of other pro settings that the default app lacks, including shutter speed, RAW capture and manual focus.


See the original post:

This app gave my standard iPhone camera a Pro upgrade here's ... - Laptop Mag

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Using Machine Learning To Increase Yield And Lower Packaging … – SemiEngineering

Posted: at 12:13 am



Packaging is becoming more and more challenging and costly. Whether the reason is substrate shortages or the increased complexity of packages themselves, outsourced semiconductor assembly and test (OSAT) houses have to spend more money, more time and more resources on assembly and testing. As such, one of the more important challenges facing OSATs today is managing die that pass testing at the fab level but fail during the final package test.

But first, let's take a step back in the process and talk about the front end. A semiconductor fab will produce hundreds of wafers per week, and these wafers are verified by product testing programs. The ones that pass are sent to an OSAT for packaging and final testing. Any units that fail at the final testing stage are discarded, and the money and time spent at the OSAT dicing, packaging and testing the failed units is wasted (figure 1).

Fig. 1: The process from fab to OSAT.

According to one estimate, based on the price of a 5nm wafer for a high-end smartphone, the cost of package assembly and testing is close to 30% of the total chip cost (Table 1). Given this high percentage (30%), it is considerably more cost-effective for an OSAT to only receive wafers that are predicted to pass the final package test. This ensures fewer rejects during the final package testing step, minimized costs, and more product being shipped out. Machine learning could offer manufacturers a way to accomplish this.

Table 1: Estimated breakdown of the cost of a chip for a high-end smartphone.

Using traditional methods, an engineer obtains inline metrology/wafer electrical test results for known good wafers that pass the final package test. The engineer then conducts a correlation analysis using a yield management software statistics package to determine which parameters and factors have the highest correlation to the final test yield. Using these parameters, the engineer then performs a regression fit, and a linear/non-linear model is generated. In addition, the model set forth by the yield management software is validated with new data. However, this is not a hands-off process. A periodic manual review of the model is needed.

Machine learning takes a different approach. In contrast to the previously mentioned method, which places greater emphasis on finding the model that best explains the final package test data, an approach utilizing machine learning capabilities emphasizes a models predictive ability. Due to the limited capacity of OSATs, a machine learning model trained with metrology and product testing data at the fab level and final test package data at the OSAT level creates representative results for the final package test.

With the deployment of a machine learning model predicting the final test yield of wafers at the OSAT, bad wafers will be automatically tagged at the fab in a manufacturing execution system and assigned a wafer grade of last-to-ship (LTS). Fab real-time dispatching will move wafers with that grade to an LTS wafer bank, while wafers that meet the passing criteria of the machine learning model will be shipped to the OSAT, thus ensuring only good parts are sent to the packaging house for dicing and packaging. Moreover, additional production data would be used to validate the machine learning model's predictions, with the end result being increased confidence in the model. A blind test can even examine specific critical parts of a wafer.
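As an illustration of the kind of model described here (not the specific system in the article), a wafer-level classifier could be trained on fab metrology and electrical-test parameters against final package pass/fail labels; the file and column names below are hypothetical:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-wafer history: inline metrology / wafer electrical test features
# joined with the final package test outcome reported back from the OSAT
data = pd.read_csv("wafer_history.csv")
X = data.drop(columns=["wafer_id", "final_test_pass"])
y = data["final_test_pass"]          # 1 = wafer passed final package test

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_val, model.predict(X_val)))

# Wafers the model predicts will fail would be graded last-to-ship (LTS)
# instead of being dispatched to the OSAT
```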

The machine learning approach also offers several advantages to more traditional approaches. This model is inherently tolerant of out-of-control conditions, trends and patterns are easily identified, the results can be improved with more data, and perhaps most significantly, no human intervention is needed.

Unfortunately, there are downsides. A large volume of data is needed for a machine learning model to make accurate predictions, but while more data is always welcome, this approach is not ideal for new products or R&D scenarios. In addition, this machine learning approach requires significant allocations of time and resources, and that means more compute power and more time to process complete datasets.

Furthermore, questions will need to be asked about the quality of the algorithm being used. Perhaps it is not the right model and, as a result, will not be able to deliver correct results. Or perhaps the reasoning behind the algorithm's predictions is difficult to understand. Simply put: how does the algorithm decide which wafers are, in fact, good and which will be marked last-to-ship? And then there is the matter that incorrect or incomplete data will deliver poor results. Or, as the saying goes, garbage in, garbage out.

The early detection and prediction of only good products shipping to OSATs has become increasingly critical, in part because the testing of semiconductor parts is the most expensive part of the manufacturing flow. By only testing good parts through the creation of a highly leveraged yield/operations management platform and machine learning, OSAT houses are able to increase capital utilization and return on investment, thus ensuring cost effectiveness and a continuous supply of finished goods to end customers. While this is one example of the effectiveness of machine learning models, there is so much more to learn about how such approaches can increase yield and lower costs for OSATs.

Read the original:

Using Machine Learning To Increase Yield And Lower Packaging ... - SemiEngineering

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

10 TensorFlow Courses to Get Started with AI & Machine Learning – Fordham Ram

Posted: at 12:13 am



Looking for ways to improve your TensorFlow machine learning skills?

As TensorFlow gains popularity, it has become imperative for aspiring data scientists and machine learning engineers to learn this open-source software library for dataflow and differentiable programming. However, finding the right TensorFlow course that suits your needs and budget can be tricky.

In this article, we have rounded up the top 10 online free and paid TensorFlow courses that will help you master this powerful machine learning framework.

Let's dive into TensorFlow and see which of our top 10 picks will help you take your machine-learning skills to the next level.

This course from Udacity is available free of cost. The course has 4 modules, each teaching you how to use models from TF Lite in different applications. This course will teach you everything you need to know to use TF Lite for Internet of Things devices, Raspberry Pi, and more.

The course starts with an overview of TensorFlow Lite, then moves on to:

This course is ideal for people proficient in Python, iOS, Swift, or Linux.

Duration: 2 months

Price: Free

Certificate of Completion: No

With over 91,534 enrolled students and thousands of positive reviews, this Udemy course is one of the best-selling TensorFlow courses. It was created by José Portilla, who is famous for his record-breaking Udemy course, The Complete Python 3 Bootcamp, with over 1.5 million students enrolled.

As you progress through this course, you will learn to use TensorFlow for various tasks, including image classification with Convolutional Neural Networks (CNNs). You'll also learn how to design your own neural network from scratch and analyze time series.

Overall, this course is excellent for learning TensorFlow fundamentals using Python. The course covers the basics of TensorFlow and more and does not require any prior knowledge of Machine Learning.

Duration: 14 hrs

Price: Paid

Certificate of Completion: Yes

TensorFlow: Intro to TensorFlow for Deep Learning is third in our list of free TensorFlow courses one should definitely check out. This course includes a total of 10 modules. In the first part of the course, Dr. Sebastian Thrun, co-founder of Udacity, gives an interview about machine learning and Udacity.

Initially, you'll learn about the Fashion MNIST dataset. Then, as you progress through the course, you'll learn how to build a DNN model that categorizes pictures from the Fashion MNIST dataset.
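For readers who want a preview of what that looks like in code, here is a minimal Keras sketch of a dense network classifying Fashion MNIST; the architecture and hyperparameters are illustrative, not the course's exact material:

```python
import tensorflow as tf

# Fashion MNIST: 60,000 training and 10,000 test images in 10 clothing classes
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```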

The course covers other vital subjects, including transfer learning and forecasting time series.

This course is ideal for students who are fluent in Python and have some knowledge of linear algebra.

Duration: 2 months

Price: Free

Certificate of Completion: No

This course from Coursera is an excellent way to learn about the basics of TensorFlow. In this program, you'll learn how to design and train neural networks and explore fascinating new AI and machine learning areas.

As you train a network to recognize real-world images, you'll also learn how convolutions can be used to boost a network's speed. Additionally, you'll train a neural network to recognize human speech with NLP systems.

Even though auditing the courses is free, certification will cost you. However, if you complete the course within 7 days of enrolling, you can claim a full refund and get a certificate.

This course is for those who already have some prior experience.

Duration: 2 months

Price: Free

Certificate of Completion: Yes

This is a free Coursera course introducing TensorFlow for AI. To get started, you must first click on Enroll for Free and sign up. Then, you'll be prompted to select your preferred subscription period in a new window.

There will be a button that says Audit the Course. Clicking that button will give you access to the course for free.

As part of the first week of this course, Andrew Ng, the instructor, will provide a brief overview. Later, there will be a discussion about what the course is all about.

The Fashion MNIST dataset is introduced in the second week as a context for the fundamentals of computer vision. The purpose of this section is for you to put your knowledge into practice by writing your own computer vision neural network (CVNN) code.

Those with some Python experience will benefit the most from this course.

Duration: 4 months

Price: Free

Certificate of Completion: Yes

For those seeking TensorFlow Developer Certification in 2023, TensorFlow Developer Certificate in 2023: Zero to Mastery is an excellent choice since it is comprehensive, in-depth, and top-quality.

In this online course, you'll learn everything you need to know to advance from knowing nothing about TensorFlow to being a fully certified member of Google's TensorFlow Certification Network, all under the guidance of Daniel Bourke, a TensorFlow Accredited Professional.

The course will involve completing exercises, carrying out experiments, and designing models for machine learning and applications under the guidance of TensorFlow Certified Expert Daniel Bourke.

By enrolling in this 64-hour course, you will learn everything you need to know about designing cutting-edge deep learning solutions and passing the TensorFlow Developer certification exam.

This course is a right fit for anyone wanting to advance from TensorFlow novice to Google Certified Professional.

Duration: 64 hrs

Price: Paid

Certificate of Completion: Yes

This is yet another high-quality course that is free to audit. This course features a five-week study schedule.

This online course will teach you how to use TensorFlow to create deep learning models from start to finish. You'll learn through hands-on programming sessions led by an experienced instructor, where you can immediately put what you've learned into practice.

The third and fourth weeks focus on model validation, normalization, TensorFlow Hub modules and more, and the final week is dedicated to a capstone project. Students in this course get a great deal of hands-on learning and work.

This course is ideal for those who are already familiar with Python and understand the Machine learning fundamentals.

Duration: 26 hrs

Price: Free

Certificate of Completion: No

This hands-on course introduces you to Google's cutting-edge deep learning framework, TensorFlow, and shows you how to use it.

This program is geared toward learners who are in a bit of a rush to get up to speed. However, it also provides in-depth segments for those interested in the theory behind things like loss functions and gradient descent methods.

This course will teach you how to build Python recommendation systems with TensorFlow. As far as the course goes, it was created by Lazy Programmer, one of the best instructors on Udemy for machine learning.

Furthermore, you will create an app that predicts the stock market using Python. If you prefer hands-on learning through projects, this TensorFlow course is ideal for you.

This is a fantastic resource for those new to programming and just getting their feet wet in the fields of Data Science and Machine Learning.

Duration: 23.5 hrs

Price: Paid

Certificate of Completion: Yes

This resource is excellent for learning TensorFlow and machine learning on Google Cloud. The course offers an advanced TensorFlow environment for building robust and complex deep models using deep learning.

People who are just getting started will find this course one of the most promising. It has five modules that will teach you a lot about TensorFlow and machine learning.

A course like this is perfect for those who are just starting.

Duration: 4 months

Price: Free

Certificate of Completion: Paid Certificate

This course, developed by Hadelin de Ponteves, the Ligency I Team, and Luka Anicin, will introduce you to neural networks and TensorFlow in less than 13 hours. The course provides a more basic introduction to TensorFlow and Keras than its counterparts.

In this course, you'll begin with Python syntax fundamentals, then proceed to programming neural networks using TensorFlow, Google's machine learning framework.

A major advantage of this course is its use of Colab for labs and assignments. With Colab, students have less chance of making mistakes, plus you get an excellent, shareable online portfolio of your work.

This course is intended for programmers who are already comfortable working with Python.

Duration: 13 hrs

Price: Paid

Certificate of Completion: Yes

In conclusion, we've discussed 10 online free and paid TensorFlow courses that can help you learn and improve your skills in this powerful machine-learning framework. We've seen that there are options available for beginners and more advanced users and that some courses offer hands-on projects and real-world applications.

If you're interested in taking your TensorFlow skills to the next level, we encourage you to explore some of the courses we've covered in this post. Whether you're looking for a free introduction or a more in-depth paid course, there's something for everyone.

So don't wait: enroll in one of these incredibly helpful courses today and start learning TensorFlow!

And as always, we'd love to hear your thoughts and experiences in the comments below. What other TensorFlow courses have you tried? Let us know!

Online TensorFlow courses can be suitable for beginners, but some prior knowledge of machine learning concepts can be helpful. Choosing a course that aligns with your skill level and offers clear explanations of the foundational concepts is important. Some courses may assume prior knowledge of Python programming or linear algebra, so it's important to research the course requirements before enrolling.

The duration of a typical TensorFlow course can vary widely, ranging from a few weeks to several months, depending on the level of depth and complexity. The amount of time you should dedicate to learning each week will depend on the course and your schedule, but most courses recommend several hours of study time per week to make meaningful progress.

Some best practices for learning TensorFlow online include setting clear learning objectives, taking comprehensive notes, practicing coding exercises regularly, seeking help from online forums or community groups, and working on real-world projects to apply your knowledge. To ensure you're progressing and mastering the concepts, track your progress, regularly test your understanding of the material, and seek feedback from peers or instructors.

Prerequisites for online TensorFlow courses may vary, but basic programming skills and familiarity with Python are often required. A solid understanding of linear algebra and calculus can also help with the underlying mathematical concepts. Some courses may require hardware, such as a powerful graphics processing unit (GPU), for training large-scale deep learning models. It's important to carefully review the course requirements before enrolling.

Some online TensorFlow courses offer certifications upon completion, but there are no official degrees in TensorFlow. Earning a certification can demonstrate your knowledge and proficiency in the framework, which can help advance your career in machine learning or data science. However, it's important to supplement your knowledge with real-world projects and practical experience to be successful in the field.

Continued here:

10 TensorFlow Courses to Get Started with AI & Machine Learning - Fordham Ram

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

The real-world ways that businesses can harness ML – SmartCompany

Posted: at 12:13 am



Nearmap, senior director, AI systems, Mike Bewley; Deloitte, Strategy & AI, Alon Ellis; AWS ANZ, chief technologist, Rada Stanic; and SmartCompany, editor in chief, Simon Crerar.

The power of machine learning (ML) is within reach of every business. No longer the domain of organisations with data scientists and ML experts on staff, the technology is rapidly moving into the mainstream. For businesses now, the question is: what can ML do for us?

As discussed in chapter four of the AWS eBook Innovate With AI/ML To Transform Your Business, ML isn't just about building the technology, it's about putting existing examples to work. "What we're seeing is a lot of these solutions coming to market and customers are asking for them," says Simon Johnston, AWS artificial intelligence and machine learning practice lead for ANZ. "They're like: we don't want to build this technology ourselves; we're happy for Amazon to have it and we'll do a commercial contract to use this technology."

With that philosophy in mind, let's take a look at three areas of ML and the use cases within them that every business can harness, even without ML expertise.

Data-heavy documents pose a real problem for many businesses. Take a home loan application, for example. These are often very large documents that require significant data input from applicants with the potential for incorrectly-filled forms, missing data and other mistakes. Then, the application needs to be manually processed and data extracted, which is difficult (particularly where multiple types of forms or data are concerned), potentially inaccurate and time-consuming. For businesses, ML offers a simpler way forward.

"It's all about reducing that time in terms of managing documents and processes," says Johnston. "It's about how they can automatically speed up how these processes work from a back-of-office perspective." This is where machine learning solutions like intelligent document processing (IDP) come into play. IDP services like Textract use machine learning techniques such as optical character recognition (OCR) and natural language processing (NLP) to extract and interpret data from dense forms quickly and accurately, saving employee time and limiting mistakes.
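As a rough sketch of what calling such a service looks like from Python, the boto3 snippet below asks Amazon Textract to pull form fields and tables out of a scanned document; the bucket and file names are placeholders:

```python
import boto3

textract = boto3.client("textract")

# Analyze a scanned form stored in S3 and extract key-value pairs (FORMS) and tables
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "my-docs-bucket", "Name": "loan_application.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Every detected element comes back as a Block; KEY_VALUE_SET blocks hold the form fields
for block in response["Blocks"]:
    if block["BlockType"] == "KEY_VALUE_SET":
        print(block.get("EntityTypes"), block["Id"])
```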

The power of ML in data extraction can be seen in more than just application documents in banking. Consider these use cases:

Learn more about how you can harness the power of AI and ML with AWS eBook Innovate With AI/ML To Transform Your Business

Just like data extraction, the most impactful ML use cases are often subtle additions to a business rather than wholesale change. In the world of customer experience (sometimes called CX), ML can provide a positive improvement without the need for organisational restructure or technological overhaul. Here are two CX-focused ML use cases to consider:

ML is more than just document analysis and customer experience. As we've seen with recent breaches, keeping customer and business data safe should be everyone's top priority. In fact, in chapter 5 of Innovate With AI/ML To Transform Your Business, we learned that good security is one of the foundations of effective AI.

One security-focused use case is a common point of concern for businesses: identity verification. Tools like Rekognition let businesses bypass human-led authorisation, which is time-consuming, costly and prone to human error. Using automated ML identity recognition tools lets businesses like banks, healthcare providers and ecommerce platforms quickly verify their customers and prevent unauthorised access. With ML, complex facial and identity recognition can be done instantly with a system that is always improving.
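For illustration, a minimal boto3 call to Amazon Rekognition's face comparison, the kind of check an identity-verification flow might run; the image locations and similarity threshold are assumptions:

```python
import boto3

rekognition = boto3.client("rekognition")

# Compare a freshly captured selfie against the photo on an ID document stored in S3
result = rekognition.compare_faces(
    SourceImage={"S3Object": {"Bucket": "kyc-bucket", "Name": "selfie.jpg"}},
    TargetImage={"S3Object": {"Bucket": "kyc-bucket", "Name": "id_document.jpg"}},
    SimilarityThreshold=90,
)

for match in result["FaceMatches"]:
    print("Similarity:", match["Similarity"])   # high similarity suggests the same person
```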

Similarly, fraud detection is integral to keeping online businesses usable for customers and profitable for organisations. Amazon Fraud Detector is one example of an ML-powered tool allowing businesses real-time fraud prevention, letting companies block fraudulent account creation, payment fraud and fake reviews. Particularly for ecommerce businesses, having an out-of-the-box solution to fraud is vital.

Read now: Leaning into AI: Keynote speakers

The rest is here:

The real-world ways that businesses can harness ML - SmartCompany

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Exploring movement optimization for a cyborg cockroach with machine learning – Tech Xplore

Posted: at 12:13 am



This article has been reviewed according to ScienceX's editorial process and policies. Editors have highlighted the following attributes while ensuring the content's credibility:

by Beijing Institute of Technology Press Co., Ltd

Scientists from Osaka University designed a cyborg cockroach and optimized its movement by utilizing machine learning-based automatic stimulation. Credit: Cyborg and Bionic Systems

Have you ever wondered why some insects like cockroaches prefer to stay put or move less in darkness? Some may tell you it's called photophobia, a habit deeply coded in their genes. A further question is whether we can correct this habit, that is, make cockroaches move in darkness just as they do in bright surroundings.

Scientists from Osaka University may have answered this question by converting a cockroach into a cyborg. They published their research in the journal Cyborg and Bionic Systems.

With millions of years of evolution, natural animals are endowed with outstanding capabilities to survive and thrive in hostile environments. In recent years, these animals have inspired roboticists to develop automatic machines to recapitulate part of these extinguished capabilities, that is, biologically inspired biomimetic robots.

An alternative to this path is to directly build controllable machines on these natural animals by implanting stimulation electrodes into their brains or peripheral nervous system to control their movement and even see what they see, so-called cyborgs. Among these studies, cyborg insects are attracting ever-increasing attention for their availability, simpler neuro-muscular pathways, and easier operation to intrusively stimulate their peripheral nervous system or muscles.

Cockroaches have marvelous locomotion ability, which significantly outperforms any biomimetic robots of similar size. Therefore, cyborg cockroaches equipped with such agile locomotion are suitable for search and rescue missions in unknown and unstructured environments that traditional robots can hardly access.

"Cockroaches prefer to stay in the darkened, narrow areas over the bright, spacious areas. Moreover, they tend to be active in the hotter environment," explained study author Keisuke Morishima, a roboticist from Department of Mechanical Engineering, Osaka University, "These natural behaviors will hinder the cockroaches to be utilized in unknown and under-rubble environments for search and rescue applications. It will be difficult to apply a mini live stream camera attached to them in a dark or without light areas for real-time monitoring purposes."

"This study aims to optimize cyborg cockroach movement performance," said Morishima. To this end, they proposed a machine learning-based approach that automatically detects the motion state of this cyborg cockroach via IMU measurements. If the cockroach stops or freezes in darkness or cooler environment, electrical stimulation would be applied to their brain to make it move.

"With this online detector, the stimulation is minimized to prevent the cockroaches from fatigue due to too many stimulations," said Mochammad Ariyanto, Morishima's colleague from Department of Mechanical Engineering, Osaka University.

This idea of restraining electrical stimulation to necessary circumstances, which is determined by AI algorithms via onboard measurements, is intuitively promising. "We don't have to control the cyborg like controlling a robot. They can have some extent of autonomy, which is the basis of their agile locomotion. For example, in a rescue scenario, we only need to stimulate the cockroach to turn its direction when it's walking the wrong way or move when it stops unexpectedly," said Morishima.

"Equipped with such a system, the cyborg successfully increased its average search rate and traveled distance up to 68% and 70%, respectively, while the stop time was reduced by 78%," said the study authors. "We have proven that it's feasible to apply electrical stimulation on the cockroach's cerci; it can overcome its innate habit, for example, increase movement in dark and cold environments where it normally decreases its locomotion."

"In this study, cerci were stimulated to trigger the free-walking motion of the Madagascar hissing cockroach (MHC)."

More information: Mochammad Ariyanto et al, Movement Optimization for a Cyborg Cockroach in a Bounded Space Incorporating Machine Learning, Cyborg and Bionic Systems (2023). DOI: 10.34133/cbsystems.0012

Provided by Beijing Institute of Technology Press Co., Ltd

Read more here:

Exploring movement optimization for a cyborg cockroach with machine learning - Tech Xplore

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

How Will ChatGPT Shape Business, Society and Employment? – INSEAD Knowledge

Posted: at 12:13 am



Chess grandmaster Garry Kasparov wrote in Deep Thinking that a (weak) human working with a machine, with a strong process for working together, can produce better outcomes than when AI and humans work alone, said Evgeniou. According to Kasparov, building a better process at the human-machine interface requires humans to be informed. In other words, we need to know the technology to understand its potential, limits and challenges.

Unpacking ChatGPT

ChatGPT is a specific product in a class of technologies known as large language models (LLMs), an application area of machine learning (ML), itself at the heart of modern AI. Like all ML algorithms, ChatGPT looks at a large amount of data, finds patterns (regularities that occur with high enough probability in the data) and uses these patterns to make predictions, such as which word to generate next given the previous ones, explained Puranam.

In school, we may have sat for tests where we were shown a sequence of shapes such as a triangle, a circle, a star and a triangle, and asked to predict what comes next. In simple terms, that's what machine learning does, he said.

The term GPT is derived from the phrase generative pre-trained transformer. It is generative as it generates text as a prediction of what users are likely to find useful based on their questions or instructions. It's pre-trained by an algorithm called a transformer using a large corpus of text.

In a nutshell, said Puranam, LLMs such as ChatGPT are complex ML algorithms that find patterns in very large volumes of text generated by people in the past and use them to predict what specific users might find useful based on their inputs. The complexity is evident, with an estimated 175 billion parameters in ChatGPT and an estimated 170 trillion parameters in GPT-4, an advanced version of ChatGPT.
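As a concrete, hedged illustration of "predicting the next word," the same family of technique can be tried on a much smaller open model such as GPT-2 via the Hugging Face transformers library (this is not ChatGPT itself, only a small relative):

```python
from transformers import pipeline

# A small generative language model proposing likely continuations of a prompt
generator = pipeline("text-generation", model="gpt2")
print(generator("Machine learning models find patterns in data and",
                max_new_tokens=20, num_return_sequences=1))
```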

To appreciate the potential of LLMs such as ChatGPT, said Evgeniou, it is important to understand that they are not necessarily products, but foundation models. Since foundation models are used in different downstream applications, what we are seeing is just the tip of the iceberg.

Foundation to a myriad of applications

ChatGPT is most commonly used to synthesise or summarise text, translate text to programming language (such as R and Python) and search. In the business context, Puranam provided examples of applications such as copywriting for marketing materials, customer interaction, synthesising large legal documents, writing operational checklists and developing financial summaries.

Due to ChatGPT's ability to generate text from different viewpoints, it can widen perspectives and improve creativity, potentially beyond human imagination, said Evgeniou. For example, you can generate short summaries of text, such as your company's mission statement, from various perspectives, say, that of a European, American, Chinese, 10-year-old or 80-year-old person.

It's already being used in business to enhance creativity and business success: Coca-Cola, for instance, used AI effectively to engage its customers in its recent marketing campaign. But creativity is not limited only to creative fields, stressed Puranam. The technology can leverage human creativity by generating alternatives for business plans, business models and so on. However, humans ultimately need to evaluate the quality of the content generated.

In more advanced applications, Olsen stated that innovation is typically driven by fundamental and corporate research. The more AI can help in these processes, the faster we can see real innovation, just as how using AI in biomedical research has reduced the time taken for drug discovery and protein-folding predictions to a mere fraction of the time taken by a human.

Evgeniou believes that AI can augment human intelligence, leading to the creation of new needs that we didn't even know of and creating new companies, products, markets and jobs at a much faster pace.

What does ChatGPT mean for business?

While ChatGPT brings new possibilities, we need sound processes to enable humans and AI to work together effectively.

In addition, trust is a necessary ingredient in technology adoption. But trust is a double-edged sword: when users place too much trust in technology, it can lead to overconfidence in decision-making or narrative fallacy, where people make up stories based on the narratives generated by LLMs. In high-risk applications, it can even jeopardise their safety.

Trust is also associated with the question of liability, as Evgeniou noted: If professionals such as doctors, lawyers and architects make mistakes as a result of prioritising AI's decisions over their own judgement, are they culpable? Would they be covered under malpractice or professional liability insurance?

From the perspective of consumer trust and safety, the exponential growth of content made possible by technologies such as ChatGPT has made content moderation, a critical issue for online trust and safety, more challenging for online platforms. Moreover, the role of AI in creating information filters and bubbles has been put in the spotlight.

Families of the Paris terrorism attack victims are suing Google for the role of its AI recommendation algorithm in allegedly promoting terrorism. The Communications Decency Act (Section 230) is being challenged in the United States Supreme Court for the first time, which raises the alarm on the potential dangers of recommendation algorithms and opens other online platforms that employ AI to litigation risks, said Evgeniou.

Talent development is another consideration. Puranam cautioned that over-reliance on LLMs can atrophy our skills, particularly in creative and critical thinking. Companies should avoid the myopic view of automating lower-end work just because technology allows for it. In some professions, you can't be a partner without having been an associate, and you can't be a full professor without having been a research assistant, he said. Therefore, automation without due consideration for talent development can disrupt the organisation's talent pipeline.

Evgeniou proposed that companies put in place guidelines to ensure that AI is harnessed safely, specifying who should use it, when and how. In AI adoption, we need to put humans in the driver's seat to monitor the behaviour of AI, he said.

Is society ready?

While some people are understandably concerned about being replaced by ChatGPT, technological unemployment hasn't happened in the last 150 years, said Olsen. AI is not expected to lead to massive unemployment in the next five to ten years, he assured, so the more relevant concern is how it would affect income distribution.

New technologies can bring about two effects: productivity and substitution. Productivity effects will only become apparent in productivity statistics over time, as economist Robert Solow observed. As for the substitution effect, it affects individuals to different extents depending on their skill level.

In the 1850s, low-skill-biased technological change saw the displacement of skilled shoemakers by unskilled workers who mass produced shoes in factories. On the other hand, the skill-biased technology that enabled factory automation in the 1980s to 2010s favoured those with university degrees over low-skilled factory workers. Currently, it is unclear which group will benefit from LLMs.

At a more fundamental level, there is the question of whether LLMs can be truly unbiased and inclusive. Understanding how they learn reveals why they can be inherently biased. ML algorithms such as ChatGPT build knowledge through unsupervised learning (i.e. observing conversations), supervised learning and reinforcement learning, where experts train the models based on users' feedback, explained Puranam and Evgeniou.

This means that ChatGPT learns from individuals who train and use it, and the machine adopts their values, views and biases on politics, society and the world at large. Therefore, while ChatGPT can be democratising, it can also be centralised depending on the experts who train it, said Puranam.

Moreover, the risk of misinformation is heightened due to the speed of content being proliferated and how content can be weaponised to threaten democracies and institutions. It is even now expected to influence election campaigns, said Evgeniou. Puranam also cautioned that people whose social lives exist only in online channels are at high risk, as they may fail to judge truth from falsehood. Olsen agreed that ChatGPT can perpetuate the views of individuals who are already siloed in their own informational bubbles online.

The panellists were cautiously optimistic and agreed on the need for appropriate management and regulation to ensure ethical and responsible use of technologies such as ChatGPT.

Learning to work together

In practice, regulation will always fall behind tech innovation. The European Union Digital Services Act to safeguard online safety fell behind as soon as it was enforced in late 2022, since it only covers online platforms such as Facebook and Google but not ChatGPT, even though the latter aggregates online content.

Similarly, although foundation models can be used in high-risk products downstream, they fall through the cracks in AI regulations.

But regulating an emerging, evolving technology across different geographical regions comes with challenges. AI algorithms adopt values from the data used to train them, which can result in different AI culture across regions. This increases the complexity of regulation, said Evgeniou. Even if regulations are the same in different parts of the world, the implementations and results will differ not only because of different legal systems, but also different value systems.

In spite of the challenges, a combination of actions by data scientists, businesses and regulatory bodies can improve tech trust and safety. Transparency and trust often go hand in hand, and it pays when businesses are transparent in their engagement with customers. For instance, they can inform customers when content is generated by ChatGPT and when customers are interacting with a machine instead of a human.

An ongoing development to ensure that AI is more aligned with human values is the field of reinforcement learning with human feedback (RLHF), said Evgeniou. By incorporating human feedback, we can try to improve the quality of the AI's output based on human values. However, according to Evgeniou, we are only at the beginning of solving the AI value alignment problem.

In the meantime, while it is proven that AI can beat a human at chess, this is not the case in all fields. As LLMs continue to evolve, all the panellists saw human-machine ensembling as a promising area to use AI to improve the quality of human thinking and identify the necessary conditions to achieve it.

See original here:

How Will ChatGPT Shape Business, Society and Employment? - INSEAD Knowledge

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Calling AI Experts: Join The Hunt For Exoplanets – Eurasia Review

Posted: at 12:13 am



Artificial Intelligence (AI) experts have been challenged to help a new space mission investigate Earth's place in the universe.

The Ariel Data Challenge 2023, which launches on 14 April, is inviting AI and machine learning experts from industry and academia to help astronomers understand planets outside our solar system, known as exoplanets.

Dr Ingo Waldmann, Associate Professor in Astrophysics at UCL (University College London) and Ariel Data Challenge lead, said: "AI has revolutionised many fields of science and industry in the past years. The field of exoplanets has fully arrived in the era of big data, and cutting-edge AI is needed to break some of our biggest bottlenecks holding us back."

For centuries, astronomers could only glimpse the planets in our solar system but in recent years, thanks to telescopes in space, they have discovered more than 5000 planets orbiting other stars in our galaxy.

The European Space Agency's Ariel telescope will complete one of the largest-ever surveys of these planets by observing the atmospheres of around one-fifth of the known exoplanets.

Due to the large number of planets in this survey, and the expected complexity of the captured observations, Ariel mission scientists are calling for the help of the AI and machine learning community to help interpret the data.

Ariel will study the light from each exoplanet's host star after it has travelled through the planet's atmosphere, in what is known as a spectrum. The information from these spectra can help scientists investigate the chemical makeup of the planet's atmosphere and discover more about these planets and how they formed.

Scientists involved in the Ariel mission need a new method to interpret these data. Advanced machine learning techniques could help them to understand the impact of different atmospheric phenomena on the observed spectrum.

The Ariel Data Challenge calls on the AI community to investigate solutions. The competition is open from 14 April to 18 June 2023.

Participants are free to use any model, algorithm, data pre-processing technique or other tools to provide a solution. They may submit as many solutions as they like and collaborations between teams are welcomed.

This year, the competition also offers participants access to High Performance Computing resources through DiRAC, part of the UK's Science and Technology Facilities Council's computing facilities.

Kai Hou (Gordon) Yip, Postdoctoral Research Fellow at UCL and Ariel Data Challenge lead, said: "With the arrival of next-generation instrumentation, astronomers are struggling to keep up with the complexity and volume of incoming exoplanetary data. The ECML-PKDD data challenge 2023 provides an excellent platform to facilitate cross-disciplinary solutions with AI experts."

Winners will be invited to present their solutions at the prestigious ECML conference. The top three winning teams will receive sponsored tickets to ECML-PKDD in Turin or the cash equivalent.

Winners will also be invited to present their solutions to the Ariel consortium. The UK Space Agency, Centre National d'Etudes Spatiales (CNES), European Research Council, UKRI Science and Technology Facilities Council (STFC), European Space Agency and Europlanet Society support the competition.

For the first time, DiRAC is providing free access to GPU computing resources to selected participants. The application is open for all.

This is the fourth Ariel Machine Learning Data challenge following successful competitions in 2019, 2021 and 2022. The 2022 challenge welcomed 230 participating teams from across the world, including entrants from leading academic institutes and AI companies.

This challenge and its predecessors have taken a bite-sized aspect of a larger problem to help make exoplanet research more accessible to the machine-learning community. These challenges are not designed to solve the mission's data analysis issues outright, but provide a forum for new ideas and discussions and encourage future collaborations.

More details about the competition and how to take part can be found on the Ariel Data Challenge website.

Read the original:

Calling AI Experts: Join The Hunt For Exoplanets - Eurasia Review

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

Research reveals how Artificial Intelligence can help look for alien lifeforms on Mars and other planets – WION

Posted: at 12:13 am



Aliens have long been a fascinating subject for humans. Innumerable movies, TV series and books are proof of this allure. Our search for extraterrestrial life has even taken us to other planets, albeit remotely. This search has progressed in leaps and bounds in the last few years, but it is still in its natal stages. Global space agencies like the National Aeronautics and Space Administration (NASA) and China National Space Administration (CNSA) have in recent years sent rovers to Mars to aid this search remotely. However, the accuracy of these random searches remains low.


To remedy this, the Search for Extraterrestrial Intelligence (SETI) Institute has been exploring the use of artificial intelligence (AI) for finding extraterrestrial life on Mars and other icy worlds.

According to a report on Space, a recent study from SETI states that AI could be used to detect microbial life in the depths of the icy oceans on other planets.

In a paper published in Nature Astronomy, the team details how they trained a machine-learning model to scan data for signs of microbial life or other unusual features that could be indicative of alien life.


Using a machine learning algorithm called a convolutional neural network (CNN), a multidisciplinary team of scientists led by SETI's Kim Warren-Rhodes has mapped sparse lifeforms on Earth. Warren-Rhodes worked alongside experts from other prestigious institutions: Michael Phillips of the Johns Hopkins Applied Physics Lab and Freddie Kalaitzis of the University of Oxford.

The system they developed used statistical ecology and AI to detect biosignatures with up to 87.5 per cent accuracy, compared to only 10 per cent for random searches. According to the researchers, it can potentially reduce the search area by up to 97 per cent, making it easier for scientists to locate potential chemical traces of life.


To test their system, they initially focused on the sparse lifeforms that dwell in salt domes, rocks, and crystals at Salar de Pajonales, at the boundary of the Chilean Atacama Desert and Altiplano.

Warren-Rhodes and the team collected over 8,000 images and 1,000 samples from Salar de Pajonales to search for photosynthetic microbes that may represent a biosignature on NASA's "ladder of life detection" for finding life beyond Earth.

The team also used drone imagery to simulate Mars Reconnaissance Orbiter's High-Resolution Imaging Experiment camera's Martian terrain images to examine the region.

They found that microbial life in the region is concentrated in biological hotspots that strongly relate to the availability of water.

Researchers suggest that the machine learning tools developed can be used in robotic planetary missions like NASA's Perseverance Rover. The tools can guide rovers towards areas with a higher probability of having traces of alien life, even if they are rare or hidden.

"With these models, we can design tailor-made roadmaps and algorithms to guide rovers to places with the highest probability of harbouring past or present life no matter how hidden or rare," explained Warren-Rhodes.

(With inputs from agencies)



Go here to see the original:

Research reveals how Artificial Intelligence can help look for alien lifeforms on Mars and other planets - WION

Written by admin

April 17th, 2023 at 12:13 am

Posted in Machine Learning

What we learned about AI and deep learning in 2022 – VentureBeat

Posted: December 29, 2022 at 12:20 am




It's as good a time as any to discuss the implications of advances in artificial intelligence (AI). 2022 saw interesting progress in deep learning, especially in generative models. However, as the capabilities of deep learning models increase, so does the confusion surrounding them.

On the one hand, advanced models such as ChatGPT and DALL-E are producing fascinating results and giving the impression of thinking and reasoning. On the other hand, they often make errors that show they lack some of the basic elements of intelligence that humans have.

The science community is divided on what to make of these advances. At one end of the spectrum, some scientists have gone as far as saying that sophisticated models are sentient and should be attributed personhood. Others have suggested that current deep learning approaches will lead to artificial general intelligence (AGI). Meanwhile, some scientists have studied the failures of current models and are pointing out that although useful, even the most advanced deep learning systems suffer from the same kind of failures that earlier models had.

It was against this background that the online AGI Debate #3 was held on Friday, hosted by Montreal AI president Vincent Boucher and AI researcher Gary Marcus. The conference, which featured talks by scientists from different backgrounds, discussed lessons from cognitive science and neuroscience, the path to commonsense reasoning in AI, and suggestions for architectures that can help take the next step in AI.

"Deep learning approaches can provide useful tools in many domains," said linguist and cognitive scientist Noam Chomsky. Some of these applications, such as automatic transcription and text autocomplete, have become tools we rely on every day.

"But beyond utility, what do we learn from these approaches about cognition, thinking, in particular language?" Chomsky said. "[Deep learning] systems make no distinction between possible and impossible languages. The more the systems are improved, the deeper the failure becomes. They will do even better with impossible languages and other systems."

This flaw is evident in systems like ChatGPT, which can produce text that is grammatically correct and consistent but logically and factually flawed. Presenters at the conference provided numerous examples of such flaws, such as large language models not being able to sort sentences based on length, making grave errors on simple logical problems, and making false and inconsistent statements.

According to Chomsky, the current approaches for advancing deep learning systems, which rely on adding training data, creating larger models, and using clever programming, will only exacerbate the mistakes that these systems make.

"In short, they're telling us nothing about language and thought, about cognition generally, or about what it is to be human, or any other flights of fantasy in contemporary discussion," Chomsky said.

Marcus said that a decade after the 2012 deep learning revolution, considerable progress has been made, but some issues remain.

He laid out four key aspects of cognition that are missing from deep learning systems:

"Deep neural networks will continue to make mistakes in adversarial and edge cases," said Yejin Choi, computer science professor at the University of Washington.

"The real problem we're facing today is that we simply do not know the depth or breadth of these adversarial or edge cases," Choi said. "My hunch is that this is going to be a real challenge that a lot of people might be underestimating. The true difference between human intelligence and current AI is still so vast."

Choi said that the gap between human and artificial intelligence is caused by a lack of common sense, which she described as the "dark matter" of language and intelligence: the unspoken rules of how the world works that influence the way people use and interpret language.

According to Choi, common sense is trivial for humans and hard for machines because obvious things are never spoken, there are endless exceptions to every rule, and there is no universal truth in commonsense matters. "It's ambiguous, messy stuff," she said.

AI researcher and neuroscientist Dileep George emphasized the importance of mental simulation for commonsense reasoning via language. Knowledge for commonsense reasoning is acquired through sensory experience, George said, and this knowledge is stored in the perceptual and motor system. We use language to probe this model and trigger simulations in the mind.

"You can think of our perceptual and conceptual system as the simulator, which is acquired through our sensorimotor experience. Language is something that controls the simulation," he said.

George also questioned some of the current ideas for creating world models for AI systems. In most of these blueprints for world models, perception is a preprocessor that creates a representation on which the world model is built.

"That is unlikely to work because many details of perception need to be accessed on the fly for you to be able to run the simulation," he said. "Perception has to be bidirectional and has to use feedback connections to access the simulations."

While many scientists agree on the shortcomings of current AI systems, they differ on the road forward.

David Ferrucci, founder of Elemental Cognition and a former member of IBM Watson, said that we can't fulfill our vision for AI if we can't get machines to explain why they are producing the output they're producing.

Ferrucci's company is working on an AI system that integrates different modules. Machine learning models generate hypotheses based on their observations and project them onto an explicit knowledge module that ranks them. The best hypotheses are then processed by an automated reasoning module. This architecture can explain its inferences and its causal model, two features that are missing in current AI systems. The system develops its knowledge and causal models from classic deep learning approaches and interactions with humans.
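A minimal sketch of that generate-rank-reason flow might look like the following; the module interfaces and the stand-in functions are assumptions for illustration, not Elemental Cognition's actual API.

from typing import Any, Callable
def explainable_pipeline(observation: Any,
                         generate_hypotheses: Callable[[Any], list],
                         knowledge_score: Callable[[Any], float],
                         reason_over: Callable[[Any], dict],
                         top_k: int = 3) -> list:
    hypotheses = generate_hypotheses(observation)                   # learned models propose candidates
    ranked = sorted(hypotheses, key=knowledge_score, reverse=True)  # explicit knowledge ranks them
    return [reason_over(h) for h in ranked[:top_k]]                 # reasoner explains the best candidates
results = explainable_pipeline(
    "observation",
    generate_hypotheses=lambda obs: ["hypothesis_a", "hypothesis_b", "hypothesis_c"],
    knowledge_score=lambda h: len(h),
    reason_over=lambda h: {"hypothesis": h, "explanation": "consistent with prior knowledge"},
)
print(results)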

AI scientist Ben Goertzel stressed that the deep neural net systems currently dominating the commercial AI landscape will not make much progress toward building real AGI systems.

Goertzel, who is best known for coining the term AGI, said that enhancing current models such as GPT-3 with fact-checkers will not fix the problems that deep learning faces and will not make them capable of generalization like the human mind.

"Engineering true, open-ended intelligence with general intelligence is totally possible, and there are several routes to get there," Goertzel said.

He proposed three solutions, including doing a real brain simulation; making a complex self-organizing system that is quite different from the brain; or creating a hybrid cognitive architecture that self-organizes knowledge in a self-reprogramming, self-rewriting knowledge graph controlling an embodied agent. His current initiative, the OpenCog Hyperon project, is exploring the latter approach.

Francesca Rossi, IBM fellow and AI Ethics Global Leader at the Thomas J. Watson Research Center, proposed an AI architecture that takes inspiration from cognitive science and Daniel Kahneman's "thinking, fast and slow" framework.

The architecture, named Slow and Fast AI (SOFAI), uses a multi-agent approach composed of fast and slow solvers. Fast solvers rely on machine learning to solve problems. Slow solvers are more symbolic, attentive, and computationally complex. There is also a metacognitive module that acts as an arbiter and decides which agent will solve the problem. Like the human brain, if the fast solver can't address a novel situation, the metacognitive module passes it on to the slow solver. This loop then retrains the fast solver so that it gradually learns to address these situations.

"This is an architecture that is supposed to work for both autonomous systems and for supporting human decisions," Rossi said.
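The routing idea can be sketched in a few lines; this is only an illustration of the fast/slow arbitration described above, with made-up solvers and a made-up confidence threshold, not the actual SOFAI implementation.

from dataclasses import dataclass
from typing import Any, Callable
@dataclass
class SofaiController:
    fast_solver: Callable[[Any], tuple]      # learned solver returning (answer, confidence)
    slow_solver: Callable[[Any], Any]        # symbolic, deliberate, more expensive solver
    confidence_threshold: float = 0.8
    def solve(self, problem: Any) -> Any:
        answer, confidence = self.fast_solver(problem)
        if confidence >= self.confidence_threshold:
            return answer                    # fast path: the metacognitive module trusts the learned model
        return self.slow_solver(problem)     # slow path; the full system would also retrain the fast solver here
controller = SofaiController(fast_solver=lambda p: (p * 2, 0.5), slow_solver=lambda p: p * 2)
print(controller.solve(21))                  # low confidence, so the slow solver answers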

Jürgen Schmidhuber, scientific director of the Swiss AI Lab IDSIA and one of the pioneers of modern deep learning techniques, said that many of the problems raised about current AI systems have been addressed in systems and architectures introduced in past decades. Schmidhuber suggested that solving these problems is a matter of computational cost and that, in the future, we will be able to create deep learning systems that can do meta-learning and find new and better learning algorithms.

Jeff Clune, associate professor of computer science at the University of British Columbia, presented the idea of AI-generating algorithms.

"The idea is to learn as much as possible, to bootstrap from very simple beginnings all the way through to AGI," Clune said.

Such a system has an outer loop that searches through the space of possible AI agents and ultimately produces something that is very sample-efficient and very general. The evidence that this is possible is the very expensive and inefficient algorithm of Darwinian evolution that ultimately produced the human mind, Clune said.

Clune has been discussing AI-generating algorithms since 2019, an approach he believes rests on three key pillars: meta-learning architectures, meta-learning algorithms, and effective means to generate environments and data. Basically, this is a system that can constantly create, evaluate and upgrade new learning environments and algorithms.
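Under heavy simplification, that outer loop could look something like the toy sketch below, in which candidate learner configurations are mutated and scored on freshly generated environments; the environments, fitness function, and mutation rule are invented here purely to illustrate the idea.

import random
def generate_environment() -> float:
    return random.uniform(-1.0, 1.0)                          # stand-in "environment": a target to approximate
def fitness(candidate: dict, environments: list) -> float:
    return -sum(abs(candidate["param"] - env) for env in environments)
def mutate(candidate: dict) -> dict:
    return {"param": candidate["param"] + random.gauss(0.0, 0.1)}
best = {"param": random.uniform(-1.0, 1.0)}
for _ in range(50):                                           # outer loop: generate, evaluate, keep the better learner
    environments = [generate_environment() for _ in range(8)]
    challenger = mutate(best)
    if fitness(challenger, environments) > fitness(best, environments):
        best = challenger
print(best)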

At the AGI debate, Clune added a fourth pillar, which he described as leveraging human data.

"If you watch years and years of video of agents doing that task and pretrain on that, then you can go on to learn very, very difficult tasks," Clune said. "That's a really big accelerant to these efforts to try to learn as much as possible."

Learning from human-generated data is what has allowed GPT, CLIP and DALL-E to find efficient ways to generate impressive results. "AI sees further by standing on the shoulders of giant datasets," Clune said.

Clune finished by predicting a 30% chance of having AGI by 2030. He also said that current deep learning paradigms with some key enhancements will be enough to achieve AGI.

Clune warned, "I don't think we're ready as a scientific community and as a society for AGI arriving that soon, and we need to start planning for this as soon as possible. We need to start planning now."

See original here:

What we learned about AI and deep learning in 2022 - VentureBeat

Written by admin

December 29th, 2022 at 12:20 am

Posted in Machine Learning

