
Archive for the ‘Machine Learning’ Category

50 Latest Data Science And Analytics Jobs That Opened Last Week – Analytics India Magazine

Posted: September 20, 2020 at 10:56 pm



Despite the pandemic, data science remains one of the most in-demand professions. Here we list 50 of the latest job openings for data science and analytics positions from last week, in cities such as Bangalore, Mumbai, Hyderabad, Pune and more.

(The jobs are sorted according to the years of experience required).

Location: Hyderabad

Skills Required: Machine learning and statistical models, big data processing technologies such as Hadoop, Hive, Pig and Spark, SQL, etc.

Apply here.

Location: Bangalore

Skills Required: Mathematical modelling using biological datasets, statistical and advanced data analytics preferably using R, Python and/or JMP, hands-on experience in data modelling, data analysis and visualisation, database systems like Postgres, MySQL, SQLServer, etc.

Apply here.

Location: Bangalore

Skills Required: Quantitative analytics or data modelling, predictive modelling, machine learning, clustering and classification techniques, Python, C, C++, Java, SQL, Big Data frameworks and visualisation tools like Cassandra, Hadoop, Spark, Tableau, etc.

Apply here.

Location: Bangalore

Skills Required: Advanced analytics, machine learning, AI techniques, cloud-based Big Data technology, Python, R, SQL, etc.

Apply here.

Location: Thiruvananthapuram, Kerala

Skills Required: Data mining techniques, statistical analysis, building high-quality prediction systems, etc.

Apply here.

Location: Bangalore

Skills Required: Advanced ML, DL, AI, and mathematical modelling and optimisation techniques, Python, NLP, TensorFlow, PyTorch, Keras, etc.

Apply here.

Location: Bangalore

Skills Required: Java, Python, R, C++, machine learning, data mining, mathematical optimisation, simulations, experience in e-commerce or supply chain, computational, programming, data management skills, etc.

Apply here.

Location: Bangalore

Skills Required: Statistics, Machine Learning, programming skills in various languages such as R, Python, etc., NLP, Matlab, linear algebra, optimisation, probability theory, etc.

Apply here.

Location: Bangalore

Skills Required: Knowledge of industry trends, R&D areas and computationally intensive processes (e.g. optimisation), Qiskit, classical approaches to machine learning, etc.

Apply here.

Location: Bangalore

Skills Required: Java, C++, Python, natural language processing systems, C/C++, Java, Perl or Python, statistical language modelling, etc.

Apply here.

Location: Khed, Maharashtra

Skills Required: Statistical computer languages like R, Python, SQL, machine learning techniques, advanced statistical techniques and concepts, etc.

Apply here.

Location: Bangalore

Skills Required: Foundational algorithms in either machine learning, computer vision or deep learning, NLP, Python, etc.

Apply here.

Location: Hyderabad

Skills Required: SQL, CQL, MQL, Hive, NoSQL database concepts & applications, data modelling techniques (3NF, Dimensional), Python or R or Java, statistical models and machine learning algorithms, etc.

Apply here.

Location: Anekal, Karnataka

Skills Required: Machine learning, deep learning-based techniques, OpenCV, dlib, computer vision techniques, TensorFlow, Caffe, PyTorch, Keras, MXNet, Theano, etc.

Apply here.

Location: Vadodara, Gujarat

Skills Required: Large and complex data assets, design and build of explorative, predictive or prescriptive models, Python, Spark, SQL, etc.

Apply here.

Location: Remote

Skills Required: Machine Learning & AI, data science, Python, R, design and development of training programs, etc.

Apply here.

Location: Bangalore

Skills Required: Integrating applications and platforms with cloud technologies (i.e. AWS), GPU acceleration (i.e. CUDA and cuDNN), Docker containers, etc.

Apply here.

Location: Bangalore

Skills Required: ETL developer, SQL or Python developer, Netezza, etc.

Apply here.

Location: Bangalore

Skills Required: Machine learning, analytic consulting, product development, building predictive models, etc.

Apply here.

Location: Hyderabad

Skills Required: Hands-on data science, model building, boutique analytics consulting or captive analytics teams, statistical techniques, etc.

Apply here.

Location: Bangalore

Skills Required: Statistical techniques, statistical analysis tools (e.g. SAS, SPSS, R), etc.

Apply here.

Location: Bangalore

Skills Required: Probability, statistics, machine learning, data mining, artificial intelligence, big data platforms like Hadoop, Spark, Hive, etc.

Apply here.

Location: Thiruvananthapuram, Kerala

Skills Required: ML and DL approach, advanced Data/Text Mining/NLP/Computer Vision, Python, MLOps concepts, relational (MySQL) and non-relational / document databases (MongoDB/CouchDB), Microsoft Azure/AWS, etc.

Apply here.

Location: Bangalore

Skills Required: Data structures and algorithms, SQL, regex, HTTP, REST, JSON, XML, Maven, Git, JUnit, IntelliJ IDEA/Eclipse, etc.

Apply here.

Location: Delhi NCR, Bengaluru

Skills Required: Python, R, GA, Clevertap, Power BI, ML/DL algorithms, SQL, Advanced Excel, etc.

Apply here.

Location: Hyderabad

Skills Required: R language, Python, SQL, Power BI, Advanced Excel, Geographical Information Systems (GIS), etc.

Apply here.

Location: Bangalore

Skills Required: Python, PySpark, MLlib, Spark/Mesos, Hive, HBase, Impala, OpenCV, NumPy, Matplotlib, SciPy, Google Cloud, Azure, AWS, Cloudera, Hortonworks, etc.

Apply here.

Location: Mumbai

Skills Required: Programming languages (e.g. R, SAS, SPSS, Python), data visualisation techniques and software tools (e.g. Spotfire, SAS, R, Qlikview, Tableau, HTML5, D3), etc.

Apply here.

Location: Hyderabad

Skills Required: Neural networks, Python, data science, Pandas, SQL, Azure with Spark/Hadoop, etc.

Apply here.

Location: Bangalore

Skills Required: Strong statistical knowledge, statistical tools and techniques, Python, R, machine learning, etc.

Apply here.

Location: Bangalore

Skills Required: R or Python knowledge (Python+DS libraries, version control, etc.), ETL in SQL, Google/AWS platform, etc.

Apply here.

Location: Bangalore

Skills Required: R, Python, SQL, working with and creating data architectures, machine learning techniques, advanced statistical techniques, C, C++, Java, JavaScript, Redshift, S3, Spark, DigitalOcean, etc.

Apply here.

Location: Bangalore

Skills Required: Data gathering, data pre-processing, model building, coding languages including Python and PySpark, big data technology stack, etc.

View original post here:

50 Latest Data Science And Analytics Jobs That Opened Last Week - Analytics India Magazine

Written by admin

September 20th, 2020 at 10:56 pm

Posted in Machine Learning

Algorithms may never really figure us out thank goodness – The Boston Globe

Posted: at 10:56 pm



An unlikely scandal engulfed the British government last month. After COVID-19 forced the government to cancel the A-level exams that help determine university admission, the British education regulator used an algorithm to predict what score each student would have received on their exam. The algorithm relied in part on how a school's students had historically fared on the exam. Schools with richer children tended to have better track records, so the algorithm gave affluent students, even those on track for the same grades as poor students, much higher predicted scores. High-achieving, low-income pupils whose schools had not previously performed well were hit particularly hard. After threats of legal action and widespread demonstrations, the government backed down and scrapped the algorithmic grading process entirely. This wasn't an isolated incident: in the United States, similar issues plagued the International Baccalaureate exam, which used an opaque artificial intelligence system to set students' scores, prompting protests from thousands of students and parents.

These episodes highlight some of the pitfalls of algorithmic decision-making. As technology advances, companies, governments, and other organizations are increasingly relying on algorithms to predict important social outcomes, using them to allocate jobs, forecast crime, and even try to prevent child abuse. These technologies promise to increase efficiency, enable more targeted policy interventions, and eliminate human imperfections from decision-making processes. But critics worry that opaque machine learning systems will in fact reflect and further perpetuate shortcomings in how organizations typically function, including by entrenching the racial, class, and gender biases of the societies that develop these systems. When courts and parole boards have used algorithms to forecast criminal behavior, for example, they have inaccurately identified Black defendants as future criminals more often than their white counterparts. Predictive policing systems, meanwhile, have led the police to unfairly target neighborhoods with a high proportion of non-white people, regardless of the true crime rate in those areas. Companies that have used recruitment algorithms have found that they amplify bias against women.

But there is an even more basic concern about algorithmic decision-making. Even in the absence of systematic class or racial bias, what if algorithms struggle to make even remotely accurate predictions about the trajectories of individuals' lives? That concern gains new support in a recent paper published in the Proceedings of the National Academy of Sciences. The paper describes a challenge, organized by a group of sociologists at Princeton University, involving 160 research teams from universities across the country and hundreds of researchers in total, including one of the authors of this article. These teams were tasked with analyzing data from the Fragile Families and Child Wellbeing Study, an ongoing study that measures various life outcomes for thousands of families who gave birth to children in large American cities around 2000. It is one of the richest data sets available to researchers: It tracks thousands of families over time, and has been used in more than 750 scientific papers.

The task for the teams was simple. They were given access to almost all of this data and asked to predict several important life outcomes for a sample of families. Those outcomes included the child's grade point average, their grit (a commonly used measure of passion and perseverance), whether the household would be evicted, the material hardship of the household, and whether the parent would lose their job.

The teams could draw on almost 13,000 predictor variables for each family, covering areas such as education, employment, income, family relationships, environmental factors, and child health and development. The researchers were also given access to the outcomes for half of the sample, and they could use this data to hone advanced machine-learning algorithms to predict each of the outcomes for the other half of the sample, which the organizers withheld. At the end of the challenge, the organizers scored the 160 submissions based on how well the algorithms predicted what actually happened in these peoples lives.
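To make the setup concrete, here is a minimal sketch of that train-on-half, predict-the-withheld-half protocol in Python with scikit-learn. The file names, the family_id index, the "gpa" column and the scoring function are illustrative assumptions, not the challenge's actual interface; the score simply compares a model's error against always guessing the training mean.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# Hypothetical files: ~13,000 predictors per family, outcomes for half the sample.
background = pd.read_csv("background.csv", index_col="family_id")
train = pd.read_csv("train.csv", index_col="family_id")

X_train = background.loc[train.index]        # assume a clean numeric matrix
y_train = train["gpa"]                       # one of the measured outcomes

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

holdout = background.drop(train.index)       # the organizers keep these outcomes
predictions = model.predict(holdout)

def r2_holdout(y_true, y_pred, train_mean):
    """1 - MSE(model) / MSE(always guessing the training mean)."""
    baseline = np.full(len(y_true), train_mean, dtype=float)
    return 1 - mean_squared_error(y_true, y_pred) / mean_squared_error(y_true, baseline)
```

Under this kind of scoring, a value near zero means the model barely improves on the naive baseline, which is roughly where even the best submissions landed.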

The results were disappointing. Even the best-performing prediction models were only marginally better than random guesses. The models were rarely able to predict a student's GPA, for example, and they were even worse at predicting whether a family would get evicted, experience unemployment, or face material hardship. And the models gave almost no insight into how resilient a child would become.

In other words, even having access to incredibly detailed data and modern machine learning methods designed for prediction did not enable the researchers to make accurate forecasts. The results of the Fragile Families Challenge, the authors conclude with notable understatement, "raise questions about the absolute level of predictive performance that is possible for some life outcomes, even with a rich data set."

Of course, machine learning systems may be much more accurate in other domains; this paper studied the predictability of life outcomes in only one setting. But the failure to make accurate predictions cannot be blamed on the failings of any particular analyst or method. Hundreds of researchers attempted the challenge, using a wide range of statistical techniques, and they all failed.

These findings suggest that we should doubt that big data can ever perfectly predict human behavior and that policymakers working in criminal justice policy and child-protective services should be especially cautious. Even with detailed data and sophisticated prediction techniques, there may be fundamental limitations on researchers' ability to make accurate predictions. Human behavior is inherently unpredictable, social systems are complex, and the actions of individuals often defy expectations.

And yet, disappointing as this may be for technocrats and data scientists, it also suggests something reassuring about human potential. If life outcomes are not firmly predetermined, if an algorithm, given a set of past data points, cannot predict a person's trajectory, then the algorithm's limitations ultimately reflect the richness of humanity's possibilities.

Bryan Schonfeld and Sam Winter-Levy are PhD candidates in politics at Princeton University.

Read more here:

Algorithms may never really figure us out thank goodness - The Boston Globe

Written by admin

September 20th, 2020 at 10:56 pm

Posted in Machine Learning

Why Deep Learning DevCon Comes At The Right Time – Analytics India Magazine

Posted: at 10:56 pm



The Association of Data Scientists (ADaSci) recently announced Deep Learning DEVCON or DLDC 2020, a two-day virtual conference that aims to bring machine learning and deep learning practitioners and experts from the industry on a single platform to share and discuss recent developments in the field.

Scheduled for 29th and 30th October, the conference comes at a time when deep learning, a subset of machine learning, has become one of the fastest-advancing technologies in the world. From natural language processing to self-driving cars, it has come a long way. In fact, reports suggest that by 2024 the deep learning market is expected to grow at a CAGR of 25%. It can therefore be safely said that the advancements in deep learning have only just begun and have a long road ahead.

Also Read: Top 7 Upcoming Deep Learning Conferences To Watch Out For

Being a crucial subset of artificial intelligence and machine learning, deep learning has advanced rapidly over the last few years. It is being explored across industries, from healthcare and e-commerce to advertising and finance, by many leading firms as well as startups across the globe.

While companies like Waymo and Google are using deep learning for their self-driving vehicles, Apple is using the technology for its voice assistant Siri. Alongside, many are using deep learning for automatic text generation, handwriting recognition, relevant caption generation, image colourisation, earthquake prediction, and detecting brain cancers.

In recent news, Microsoft has introduced new advancements in their deep learning optimisation library DeepSpeed to enable next-gen AI capabilities at scale. It can now be used to train language models with one trillion parameters with fewer GPUs.

With that said, deep learning is expected to see increased adoption in machine translation, customer experience, content creation, image data augmentation, 3D printing and more. Much of this can be attributed to significant advancements in hardware as well as the democratisation of technology, both of which have helped the field gain traction.

Also Read: Free Online Resources To Get Hands-On Deep Learning

Many researchers and scientists across the globe have been leveraging deep learning technology in fighting the deadly COVID-19 pandemic. In fact, in recent news, some researchers have proposed deep learning-based automated CT image analysis tools that can differentiate COVID patients from those who aren't infected. In another research effort, scientists have proposed a fully automatic deep learning system for diagnosing the disease as well as prognostic analysis. Many are also using deep neural networks to analyse X-ray images to diagnose COVID-19 among patients.

Along with these, startups like Zeotap, SilverSparro and Brainalyzed are leveraging the technology to either drive growth in customer intelligence or power industrial automation and AI solutions. With such solutions, these startups are making deep learning technology more accessible to enterprises and individuals.

Also Read: 3 Common Challenges That Deep Learning Faces In Medical Imaging

Companies like Shell, Lenskart, Snaphunt, Baker Hughes, McAfee, Lowe's, L&T and Microsoft are looking for data scientists who are equipped with deep learning knowledge. With significant advancements in this field, it has now become one of the hottest skills companies look for in their data scientists.

Consequently, looking at these requirements, many edtech companies have started offering free online resources as well as paid certifications in deep learning to provide industry-relevant knowledge to enthusiasts and professionals. These courses and accreditations, in turn, bridge the major talent gap that emerging technologies typically face as they mature.

Also Read: How To Switch Careers To Deep Learning

With such major advancements and its growing number of use cases, deep learning has witnessed an upsurge in popularity as well as demand. It is thus critical, now more than ever, to understand this complex subject in depth for better research and application, and a thorough understanding is essential for building a career in this ever-evolving field.

And, for this reason, the Deep Learning DEVCON couldn't have come at a better time. Not only will it help amateurs as well as professionals gain a better understanding of the field, but it will also provide opportunities to network with leading developers and experts in the field.

Further, the talks and workshops included in the event will provide hands-on experience for deep learning practitioners with various tools and techniques. Starting with machine learning vs deep learning, followed by feed-forward neural networks and deep neural networks, the workshops will cover topics like GANs, recurrent neural networks, sequence modelling, autoencoders, and real-time object detection. The two-day workshop will also provide an overview of deep learning as a broad topic, and all attendees will receive a certificate.

The workshops will help participants build a strong understanding of deep learning, from basics to advanced, along with in-depth knowledge of artificial neural networks. They will also clarify concepts around tuning, regularising and improving models, as well as the various building blocks and their practical implementations. Alongside, they will provide practical knowledge of applying deep learning in computer vision and NLP.

Considering the conference is virtual, participants can conveniently join the talks and workshops from the comfort of their homes. It is thus a perfect opportunity to get first-hand experience of the complex world of deep learning alongside leading experts and the best minds in the field, who will share their experience to encourage enthusiasts and amateurs.

To register for Deep Learning DevCon 2020, visit here.


Read this article:

Why Deep Learning DevCon Comes At The Right Time - Analytics India Magazine

Written by admin

September 20th, 2020 at 10:56 pm

Posted in Machine Learning

Six notable benefits of AI in finance, and what they mean for humans – Daily Maverick

Posted: at 10:55 pm



Addressing AI anxiety

A common narrative around emerging technologies like AI, machine learning, and robotic process automation is the anxiety and fear that they'll replace humans. In South Africa, with an unemployment rate of over 30%, these concerns are valid.

But if we dig deep into what we can do with AI, we learn it will elevate the work that humans do, making it more valuable than ever.

Sage research found that most senior financial decision-makers (90%) are comfortable with automation performing more of their day-to-day accounting tasks in the future, and 40% believe that AI and machine learning (ML) will improve forecasting and financial planning.

What's more, two-thirds of respondents expect emerging technology to audit results continuously and to automate period-end reporting and corporate audits, reducing time to close in the process.

The key to realising these benefits is to secure buy-in from the entire organisation. With 87% of CFOs now playing a hands-on role in digital transformation, their perspective on technology is key to creating a digitally receptive team culture. And their leadership is vital in ensuring their organisations maximise their technology investments. Until employees make the same mindset shift as CFOs have, they'll need to be guided and reassured about the business's automation strategy and the potential for upskilling.

Six benefits of AI in layman's terms

Speaking during an exclusive virtual event to announce the results of the CFO 3.0 research, as well as the launch of Sage Intacct in South Africa, Aaron Harris, CTO of Sage, said one reason for the misperception about AI's impact on business and labour is that SaaS companies too often speak in technical jargon.

"We talk about AI and machine learning as if they're these magical capabilities, but we don't actually explain what they do and what problems they solve. We don't put it into terms that matter for business leaders and labour. We don't do a good job as an industry of explaining that machine learning isn't an outcome we should be looking to achieve; it's the technology that enables business outcomes, like efficiency gains and smarter predictive analytics."

For Harris, AI has remarkable benefits in six key areas:

Digital culture champions

Evolving from a traditional management style that relied on intuition to a more contemporary one based on data-driven evidence can be a culturally disruptive process. Interestingly, driving a cultural change wasn't a concern for most South African CFOs, with 73% saying their organisations are ready for more automation.

In fact, AI holds no fear for senior financial decision-makers: over two-thirds are not at all concerned about it, and only one in 10 believe that it will take away jobs.

So, how can businesses reimagine the work of humans when software bots are taking care of all the repetitive work?

How can we leverage the unique skills of humans, like collaboration, contextual understanding, and empathy?

"The future world is a world of connections," says Harris. "It will be about connecting humans in ways that allow them to work at a higher level. It will be about connecting businesses across their ecosystems so that they can implement digital business models to effectively and competitively operate in their markets. And it will be about creating connections across technology so that traditional, monolithic experiences are replaced with modern ones that reflect new ways of working and that are tailored to how individuals and humans will be most effective in this world."

New world of work

We can envision this world across three areas:

Sharing knowledge and timelines on strategic developments and explaining the significance of these changes will help CFOs to alleviate the fear of the unknown.

Technology may be the enabler driving this change, but how it transforms a business lies with those who are bold enough to take the lead. DM

Visit link:

Six notable benefits of AI in finance, and what they mean for humans - Daily Maverick

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

Twitter is looking into why its photo preview appears to favor white faces over Black faces – The Verge

Posted: at 10:55 pm



Twitter said it was looking into why the neural network it uses to generate photo previews apparently chooses to show white people's faces more frequently than Black faces.

Several Twitter users demonstrated the issue over the weekend, posting examples of posts that had a Black person's face and a white person's face. Twitter's preview showed the white faces more often.

The informal testing began after a Twitter user tried to post about a problem he noticed in Zoom's facial recognition, which was not showing the face of a Black colleague on calls. When he posted to Twitter, he noticed it too was favoring his white face over his Black colleague's face.

Users discovered the preview algorithm chose non-Black cartoon characters as well.

When Twitter first began using the neural network to automatically crop photo previews, machine learning researchers explained in a blog post how they started with facial recognition to crop images, but found it lacking, mainly because not all images have faces:

Previously, we used face detection to focus the view on the most prominent face we could find. While this is not an unreasonable heuristic, the approach has obvious limitations since not all images contain faces. Additionally, our face detector often missed faces and sometimes mistakenly detected faces when there were none. If no faces were found, we would focus the view on the center of the image. This could lead to awkwardly cropped preview images.
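That heuristic can be sketched in a few lines of Python. This is a hedged illustration only: OpenCV's stock Haar-cascade detector stands in for whatever face detector Twitter actually used, and the preview size is an arbitrary assumption.

```python
import cv2

def preview_crop(image, size=(600, 335)):
    """Crop around the most prominent detected face, else the center."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    h, w = image.shape[:2]
    cw, ch = size
    if len(faces) > 0:
        # Center the crop on the largest (most prominent) face.
        x, y, fw, fh = max(faces, key=lambda f: f[2] * f[3])
        cx, cy = x + fw // 2, y + fh // 2
    else:
        # No face found: fall back to the center of the image,
        # the behavior the blog post says produced awkward crops.
        cx, cy = w // 2, h // 2

    left = min(max(cx - cw // 2, 0), max(w - cw, 0))
    top = min(max(cy - ch // 2, 0), max(h - ch, 0))
    return image[top:top + ch, left:left + cw]
```

Twitter's neural network replaced this whole rule-based approach with learned saliency, which is part of why its choices are harder to audit than a simple rule like the one above.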

Twitter chief design officer Dantley Davis tweeted that the company was investigating the neural network as he conducted some unscientific experiments with images.

Liz Kelley of the Twitter communications team tweeted Sunday that the company had tested for bias but hadn't found evidence of racial or gender bias in its testing. "It's clear that we've got more analysis to do," Kelley tweeted. "We'll open source our work so others can review and replicate."

Twitter chief technology officer Parag Agrawal tweeted that the model needed continuous improvement, adding he was eager to learn from the experiments.

See the rest here:

Twitter is looking into why its photo preview appears to favor white faces over Black faces - The Verge

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

8 Trending skills you need to be a good Python Developer – iLounge

Posted: at 10:55 pm



Python, the general-purpose coding language, has gained much popularity over the years. Whether it is web development, app design, scientific computing or machine learning, Python has it all. Because of this favourability of Python in the market, Python developers are also in high demand. They are required to be competent, out-of-the-box thinkers, and standing out among them is undoubtedly a race to win.

Are you one of those Python developers? Do you find yourself lagging behind in proving your reliability? Maybe you are going wrong with some of your skills. Never mind!

I'm here to tell you about the 8 trendsetting skills you need to hone. Implement them and prove your expertise in the programming world. Come, let's take a look!

Being able to use Python libraries to their full potential also demonstrates your expertise with this programming language. Python libraries like Pandas, Matplotlib, Requests, Pyglet and more consist of reusable code that you'd want to add to your programs. These libraries are a boon to you as a developer. They will speed up your workflow and make task execution far easier. Nothing saves more time than not having to write the whole code every time.
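As a small illustration of that reuse (the URL and field names below are placeholders, not a real endpoint), three of these libraries can be chained in a few lines:

```python
# requests fetches the data, pandas structures it, matplotlib plots it:
# three reusable libraries replacing code nobody wants to write by hand.
import requests
import pandas as pd
import matplotlib.pyplot as plt

response = requests.get("https://example.com/api/sales.json", timeout=10)
records = response.json()            # assume a list of {"month", "revenue"} records
df = pd.DataFrame(records)

df.plot(x="month", y="revenue", kind="line")
plt.title("Monthly revenue")
plt.tight_layout()
plt.show()
```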

You might know how Python avoids repeated code through pre-developed frameworks. As a developer using a Python framework, you typically write code that conforms to a set of conventions, which makes it easy to delegate responsibility for communications, infrastructure and low-level plumbing to the framework. You can therefore concentrate on the logic of the application in your own code. A good knack for these Python frameworks can be a blessing, as it allows development to flow smoothly. You may not know them all, but it's advisable to keep up with some popular ones like Flask, Django and CherryPy.
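As a minimal, generic sketch (not tied to any particular project), here is the kind of convention-driven code a Flask developer writes; the framework supplies the HTTP parsing, routing and server loop:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # The decorator is the convention: Flask maps the URL to this function.
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run(debug=True)  # development server only, not for production
```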

Not sure of Python frameworks? You can seek help from Python Training Courses.

Object-relational mapping (ORM) is a programming technique for accessing a database. It exposes your database as a series of objects, without your having to write commands to insert or retrieve data. It may sound complex, but it can save you a lot of time and help you control access to your database. ORM tools can also be customised by a Python developer.
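Here is a short sketch of the idea using SQLAlchemy (1.4+ style); the table and data are invented for illustration:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    # One Python class maps to one table; no CREATE/INSERT/SELECT written by hand.
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///app.db")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(User(name="Asha"))       # the ORM emits the INSERT
    session.commit()
    users = session.query(User).all()    # the ORM emits the SELECT
```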

Front-end technologies like HTML5, CSS3 and JavaScript will help you collaborate and work with a team of designers, marketers and other developers. Again, this can save a lot of development time.

A good Python developer should have sharp analytical skills. You are expected to observe critically and come up with complex ideas, solutions or decisions about coding.

Analytical skills are a mark of your additional knowledge in the field. Building your analytical skills also makes you a better problem solver.

Python developers have a bright future in data science. Competitive companies will prefer developers with data science knowledge to create innovative tech solutions. Knowing Python will also build your knowledge of probability, statistics, data wrangling and SQL, all of which are significant aspects of data science.

Python is the right choice for growth in the artificial intelligence and machine learning domain. It is an intuitive and minimalistic language with a full-featured line of libraries and frameworks that considerably reduces the time required to get your first results.

However, to master artificial intelligence and machine learning with Python you need a strong command of Python syntax. A fair grounding in calculus, data science and statistics can make you a pro. If you are a beginner, you can gain expertise in these areas by brushing up your maths skills with Python's mathematical libraries. Gradually, you can acquire adequate machine learning skills by building simple neural networks.
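As one such beginner exercise, here is a toy two-layer network in plain NumPy learning XOR; the architecture, learning rate and iteration count are arbitrary illustrative choices, and convergence can vary with the random seed:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)    # 2 inputs -> 4 hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)    # 4 hidden -> 1 output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of squared error through the sigmoids
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))   # should approach [[0], [1], [1], [0]]
```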

In the coming years, deep learning professionals will be well positioned, as huge possibilities await in this field. With Python, you should be able to easily develop and evaluate deep learning models. Since deep learning is an advanced form of machine learning, you should first get hands-on experience building and evaluating models before putting it to full use.
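As a hedged sketch of what developing and evaluating a model looks like in practice, here is a small Keras classifier trained and scored on MNIST; the dataset and architecture are illustrative choices, not a prescription:

```python
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0   # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3, validation_split=0.1)
print(model.evaluate(x_test, y_test))   # [held-out loss, held-out accuracy]
```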

A good Python developer also has a mix of soft skills like proactivity, communication and time management. Most of all, a career as a Python developer is challenging, but at the same time interesting. Empowering yourself with these skill sets is sure to take you a long way. Push yourself out of your comfort zone and work hard, starting today!

Read more:

8 Trending skills you need to be a good Python Developer - iLounge

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

Automation Continuum – Leveraging AI and ML to Optimise RPA – Analytics Insight

Posted: at 10:55 pm



Over the past year, the adoption of robotic process automation, especially advanced macros or robotic workers designed to automate the most mundane, repetitive and time-consuming tasks, has seen significant growth. As the technology matures alongside artificial intelligence and machine learning, the most promising future for knowledge workers is one where the ease of deployment of RPA and the raw power of machine learning combine to create more productive, more intelligent robotic workers.

One key to adoption is that companies would prefer not to burden people with a lot of new tools, and instead let automation learn within their existing environments. In each situation, companies try to work inside whatever UI employees already use: perhaps adding a widget, or a panel to an existing dashboard, that contains the required data. Adding to the current UI, or layering routing on top of it so cases go to the right person, means employees never see the 80% of cases that are automatically delegated and never reach them.

Although RPA is now spreading into nearly every industry, its most significant adopters are banks, insurance companies, telecom firms and service organizations. This is because organizations in these sectors mostly run legacy systems, and RPA solutions integrate easily with their existing functionality.

Artificial intelligence is essentially about a computer's capacity to imitate the human mind, whether that means recognising a picture, solving a problem or holding a conversation.

Consider Facebook's AI Research as an example. Here, the social media giant feeds the AI system various pictures, and the machine delivers accurate results: shown a photograph of a dog, it not only recognises it as a dog but also identifies the breed.

RPA is a technology that automates a task based on a specific set of rules and an algorithm. While AI is centred more on performing human-level tasks, RPA is essentially software that reduces human effort; it is about saving businesses and white-collar workers time. The most well-known applications of RPA include moving data from one system to another, payroll processing, and forms processing.

Although AI is a stride ahead of RPA, the two technologies can take things to the next level when combined. For instance, suppose your reports must be in a particular format to be checked, and RPA carries out this task. If you use an AI system to sift out poorly formatted or unacceptable documents first, the RPA's job becomes much simpler. This collaboration is called the Automation Continuum.

The development of GPT-3, the Generative Pre-trained Transformer 3, is a remarkable innovation that uses AI to leverage the immense amount of language data on the internet. By training an extraordinarily large neural network, GPT-3 can comprehend and produce both human and programming languages with near-human performance. For example, given a few pairs of legal agreements and plain-English documents, it can begin to automate the task of rewriting legal contracts in plain English. This sort of sophisticated automation was unimaginable with classic RPA tools that did not use data and state-of-the-art AI.
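To illustrate that few-shot pattern without access to GPT-3's paid API, the sketch below substitutes the much smaller open GPT-2 model via the Hugging Face transformers library; the prompt is invented for illustration, and output quality will be far below GPT-3's:

```python
from transformers import pipeline

# GPT-2 stands in for GPT-3 here purely to show the prompting pattern:
# a few "Legal -> Plain" pairs, then a new legal sentence to translate.
generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Legal: The party of the first part shall indemnify the party of the second part.\n"
    "Plain: You agree to cover our losses.\n"
    "Legal: This agreement may be terminated by either party upon thirty days' notice.\n"
    "Plain:"
)
print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```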

Numerous activities, while repetitive, require understanding and judgment from a human with knowledge and experience, and this is where the next generation of RPA tools can use AI. Humans are adept at answering the question "What else is significant or interesting?". AI will help RPA tools go further than simply adding more variables to a query: it will allow RPA to take the next step and answer that "What else?" question. Essentially, applying AI to RPA will expand the scope of what these tools can do.

Even giant organizations like IBM, Microsoft and SAP are increasingly tapping into RPA, expanding the awareness and foothold of RPA software. Meanwhile, new vendors are emerging at a fast pace and have begun to stamp their presence on the industry.

However, it isn't just RPA that is the talk of the town; the role of AI is also one of the most significant developments at present. The concept of the Automation Continuum is gaining popularity among many companies. The industry is now recognising these combined capabilities, and why not: AI can read, listen and analyse, and then feed data into bots that create output, package it, and send it off. Ultimately, RPA and AI are two significant technologies that companies can use to advance their digital transformation.

With organizations going through digital transformation, and perhaps accelerating their efforts to deal with the effects of Covid-19 on their workforces, data is becoming progressively more significant. The optimization of RPA will benefit greatly from increased digitization in organizations. As organizations create data lakes and other new data stores accessible through APIs, it is critical to give RPA tools access so they can be optimized.

While RPA has delivered noteworthy benefits for automation, the next generation of RPA will deliver more through AI- and machine-learning-driven optimization. This isn't about faster automation, but about better automation.

Read the original post:

Automation Continuum - Leveraging AI and ML to Optimise RPA - Analytics Insight

Written by admin

September 20th, 2020 at 10:55 pm

Posted in Machine Learning

UT Austin Selected as Home of National AI Institute Focused on Machine Learning – UT News | The University of Texas at Austin

Posted: August 27, 2020 at 3:50 am



AUSTIN, Texas – The National Science Foundation has selected The University of Texas at Austin to lead the NSF AI Institute for Foundations of Machine Learning, bolstering the university's existing strengths in this emerging field. Machine learning is the technology that drives AI systems, enabling them to acquire knowledge and make predictions in complex environments. This technology has the potential to transform everything from transportation to entertainment to health care.

UT Austin, already among the world's top universities for artificial intelligence, is poised to develop entirely new classes of algorithms that will lead to more sophisticated and beneficial AI technologies. The university will lead a larger team of researchers that includes the University of Washington, Wichita State University and Microsoft Research.

"This is another important step in our university's ascension as a world leader in machine learning and tech innovation as a whole, and I am grateful to the National Science Foundation for their profound support," said UT Austin interim President Jay Hartzell. "Many of the world's greatest problems and challenges can be solved with the assistance of artificial intelligence, and it's only fitting, given UT's history of accomplishment in this area along with the booming tech sector in Austin, that this new NSF institute be housed right here on the Forty Acres."

UT Austin is simultaneously establishing a permanent base for campuswide machine learning research called the Machine Learning Laboratory. It will house the new AI institute and bring together computer and data scientists, mathematicians, roboticists, engineers and ethicists to meet the institute's research goals while also working collaboratively on other interdisciplinary projects. Computer science professor Adam Klivans, who led the effort to win the NSF AI institute competition, will direct both the new institute and the Machine Learning Lab. Alex Dimakis, associate professor of electrical and computer engineering, will serve as the AI institute's co-director.

"Machine learning can be used to predict which of thousands of recently formulated drugs might be most effective as a COVID-19 therapeutic, bypassing exhaustive laboratory trial and error," Klivans said. "Modern datasets, however, are often diffuse or noisy and tend to confound current techniques. Our AI institute will dig deep into the foundations of machine learning so that new AI systems will be robust to these challenges."

Additionally, many advanced AI applications are limited by computational constraints. For example, algorithms designed to help machines recognize, categorize and label images can't keep up with the massive amount of video data that people upload to the internet every day, and advances in this field could have implications across multiple industries.

Dimakis notes that algorithms will be designed to train video models efficiently. For example, Facebook, one of the AI institute's industry partners, is interested in using these algorithms to make its platform more accessible to people with visual impairments. And in a partnership with Dell Medical School, AI institute researchers will test these algorithms to expedite turnaround time for medical imaging diagnostics, possibly reducing the time it takes for patients to get critical assessments and treatment.

The NSF is investing more than $100 million in five new AI institutes nationwide, including the $20 million project based at UT Austin to advance the foundations of machine learning.

In addition to Facebook, Netflix, YouTube, Dell Technologies and the city of Austin have signed on to transfer this research into practice.

The institute will also pursue the creation of an online master's degree in AI, along with undergraduate research programming and online AI courses for high schoolers and working professionals.

Austin-based tech entrepreneurs Zaib and Amir Husain, both UT Austin alumni, are supporting the new Machine Learning Laboratory with a generous donation to sustain its long-term mission.

"The university's strengths in computer science, engineering, public policy, business and law can help drive applications of AI," Amir Husain said. "And Austin's booming tech scene is destined to be a major driver for the local and national economy for decades to come."

The Machine Learning Laboratory is based in the Department of Computer Science and is a collaboration among faculty, researchers and students from across the university, including Texas Computing; Texas Robotics; the Department of Statistics and Data Sciences; the Department of Mathematics; the Department of Electrical and Computer Engineering; the Department of Information, Risk & Operations Management; the School of Information; the Good Systems AI ethics grand challenge team; the Oden Institute for Computational Engineering and Sciences; and the Texas Advanced Computing Center (TACC).

See the article here:

UT Austin Selected as Home of National AI Institute Focused on Machine Learning - UT News | The University of Texas at Austin

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Participation-washing could be the next dangerous fad in machine learning – MIT Technology Review

Posted: at 3:50 am



More promising is the idea of participation as justice. Here, all members of the design process work together in tightly coupled relationships with frequent communication. Participation as justice is a long-term commitment that focuses on designing products guided by people from diverse backgrounds and communities, including the disability community, which has long played a leading role here. This concept has social and political importance, but capitalist market structures make it almost impossible to implement well.

Machine learning extends the tech industry's broader priorities, which center on scale and extraction. That means participatory machine learning is, for now, an oxymoron. By default, most machine-learning systems have the ability to surveil, oppress, and coerce (including in the workplace). These systems also have ways to manufacture consent, for example, by requiring users to opt in to surveillance systems in order to use certain technologies, or by implementing default settings that discourage them from exercising their right to privacy.

Given that, it's no surprise that machine learning fails to account for existing power dynamics and takes an extractive approach to collaboration. If we're not careful, participatory machine learning could follow the path of AI ethics and become just another fad that's used to legitimize injustice.

How can we avoid these dangers? There is no simple answer. But here are four suggestions:

Recognize participation as work. Many people already use machine-learning systems as they go about their day. Much of this labor maintains and improves these systems and is therefore valuable to the systems' owners. To acknowledge that, all users should be asked for consent and provided with ways to opt out of any system. If they choose to participate, they should be offered compensation. Doing this could mean clarifying when and how data generated by a user's behavior will be used for training purposes (for example, via a banner in Google Maps or an opt-in notification). It would also mean providing appropriate support for content moderators, fairly compensating ghost workers, and developing monetary or nonmonetary reward systems to compensate users for their data and labor.

Make participation context specific. Rather than trying to use a one-size-fits-all approach, technologists must be aware of the specific contexts in which they operate. For example, when designing a system to predict youth and gang violence, technologists should continuously reevaluate the ways in which they build on lived experience and domain expertise, and collaborate with the people they design for. This is particularly important as the context of a project changes over time. Documenting even small shifts in process and context can form a knowledge base for long-term, effective participation. For example, should only doctors be consulted in the design of a machine-learning system for clinical care, or should nurses and patients be included too? Making it clear why and how certain communities were involved makes such decisions and relationships transparent, accountable, and actionable.

Plan for long-term participation from the start. People are more likely to stay engaged in processes over time if they're able to share and gain knowledge, as opposed to having it extracted from them. This can be difficult to achieve in machine learning, particularly for proprietary design cases. Here, it's worth acknowledging the tensions that complicate long-term participation in machine learning, and recognizing that cooperation and justice do not scale in frictionless ways. These values require constant maintenance and must be articulated over and over again in new contexts.

Learn from past mistakes. More harm can be done by replicating the ways of thinking that originally produced harmful technology. We as researchers need to enhance our capacity for lateral thinking across applications and professions. To facilitate that, the machine-learning and design community could develop a searchable database to highlight failures of design participation (such as Sidewalk Labs' waterfront project in Toronto). These failures could be cross-referenced with socio-structural concepts (such as issues pertaining to racial inequality). This database should cover design projects in all sectors and domains, not just those in machine learning, and explicitly acknowledge absences and outliers. These edge cases are often the ones we can learn the most from.

It's exciting to see the machine-learning community embrace questions of justice and equity. But the answers shouldn't bank on participation alone. The desire for a silver bullet has plagued the tech community for too long. It's time to embrace the complexity that comes with challenging the extractive capitalist logic of machine learning.

Mona Sloane is a sociologist based at New York University. She works on design inequality in the context of AI design and policy.

Here is the original post:

Participation-washing could be the next dangerous fad in machine learning - MIT Technology Review

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

Getting to the heart of machine learning and complex humans – The Irish Times

Posted: at 3:50 am



Abeba Birhane: I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person.

You recently made a big discovery that an academic library containing millions of images used to train artificial intelligence systems had privacy and ethics issues, and that it included racist, misogynistic and other offensive content.

Yes, I worked on this with Vinay Prabhu, chief scientist at UnifyID, a privacy start-up in Silicon Valley, on the 80-million-images dataset curated by the Massachusetts Institute of Technology. We spent months looking through this dataset, and we found thousands of images labelled with insults and derogatory terms.

Using this kind of content to build and train artificial intelligence systems, including face recognition systems, would embed harmful stereotypes and prejudices and could have grave consequences for individuals in the real world.

What happened when you published the findings?

The media picked up on it, so it got a lot of publicity. MIT withdrew the database and urged people to delete their copies of the data. That was humbling and a nice result.

How does this finding fit in to your PhD research?

I study embodied cognitive science, which is at the heart of how people interact and go about their daily lives and what it means to be a person. The background assumption is that people are ambiguous; they come to be who they are through interactions with other people.

It is a different perspective to traditional cognitive science, which is all about the brain and rationality. My research looks at how artificial intelligence and machine learning has limits in how it can understand and predict the complex messiness of human behaviour and social outcomes.

Can you give me an example?

If you take the Shazam app, it works very well to recognise a piece of music that you play to it. It searches for the pattern of the music in a database, and this narrow search suits the machine approach. But predicting a social outcome from human characteristics is very different.

As humans we have infinite potentials, we can react to situations in different ways, and a machine that uses numerable parameters cannot predict whether someone is a good hire or at risk of committing a crime in the future. Humans and our interactions represent more than just a few parameters. My research looks at existing machine learning systems and the ethics of this dilemma.

How did you get into this work?

I started in physics back home in Ethiopia, but when I came to Ireland there was so much paperwork and so many exams to translate my Ethiopian qualification that I decided to start from scratch.

So I studied psychology and philosophy, and then I did a master's. The master's course had lots of elements (neuroscience, philosophy, anthropology and computer science), and we built computational models of various cognitive faculties. It is where I really found my place.

How has Covid-19 affected your research?

At the start of the pandemic, I thought this might be a chance to write up a lot of my project, but I found it hard to work at home and to unhook my mind from what was going on around the world.

I also missed the social side, going for coffee and talking with my colleagues about work and everything else. So I am glad to be back in the lab now and seeing my lab mates even at a distance.

Original post:

Getting to the heart of machine learning and complex humans - The Irish Times

Written by admin

August 27th, 2020 at 3:50 am

Posted in Machine Learning

