Federated machine learning is coming – here's the questions we should be asking

Posted: January 27, 2020 at 8:47 pm



A few years ago, I wondered how edge data would ever be useful given the enormous cost of transmitting all the data to either the centralized data center or some variant of cloud infrastructure. (It is said that 5G will solve that problem).

Consider, for example, applications of vast sensor networks that stream a great deal of data at small intervals. Vehicles on the move are a good example.

There is telemetry from cameras, radar, sonar, GPS and LIDAR, the last of which alone generates about 70 MB/sec. This could quickly amount to four terabytes per day, per vehicle. How much of this data needs to be retained? Answers I heard a few years ago were along two lines:

My counterarguments at the time were:

Introducing TensorFlow Federated, via the TensorFlow Blog:

This centralized approach can be problematic if the data is sensitive or expensive to centralize. Wouldn't it be better if we could run the data analysis and machine learning right on the devices where that data is generated, and still be able to aggregate together what's been learned?
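
TensorFlow Federated is Google's open-source toolkit for exactly this. Below is a rough sketch of the workflow, loosely patterned on the TFF image-classification tutorial from around the time of this post; function names and signatures have shifted between TFF releases, and the synthetic client data is my own stand-in, so treat this as illustrative rather than version-exact.

```python
import collections
import numpy as np
import tensorflow as tf
import tensorflow_federated as tff

def make_client_dataset():
    # Synthetic stand-in for one phone's private, on-device examples.
    x = np.random.rand(32, 784).astype(np.float32)
    y = np.random.randint(0, 10, size=(32, 1)).astype(np.int32)
    return tf.data.Dataset.from_tensor_slices(
        collections.OrderedDict(x=x, y=y)).batch(8)

# Each element of this list plays the role of one participating device.
federated_train_data = [make_client_dataset() for _ in range(3)]

def model_fn():
    # Every client builds the same (homogeneous) model locally.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=federated_train_data[0].element_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Federated averaging: broadcast the model, train on each device,
# and aggregate only the resulting weight updates on the server.
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))

state = iterative_process.initialize()
for _ in range(5):
    state, metrics = iterative_process.next(state, federated_train_data)
```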

Since I looked at this a few years ago, the distinction between an edge device and a sensor has more or less disappeared. Sensors can transmit via WiFi (though battery life is an issue, especially for remote sensors), and the definition of the edge has widened quite a bit.

Decentralized data collection and processing have become more powerful and able to do an impressive amount of computing. A case in point is Intel's Neural Compute Stick 2 (see Introducing the Intel Neural Compute Stick 2), a computer vision and deep learning accelerator powered by the Intel Movidius Myriad X VPU, which plugs into a Raspberry Pi and costs less than $70.

But for truly distributed processing, the Apple A13 chipset in the iPhone 11 has features that boggle the mind. From Inside Apple's A13 Bionic system-on-chip: the Neural Engine is a custom block of silicon, separate from the CPU and GPU, focused on accelerating machine learning computations. The CPU also carries a set of "machine learning accelerators" that perform matrix multiplication operations up to six times faster than the CPU alone. It's not clear exactly how this hardware is accessed, but for tasks like machine learning (ML) that lean heavily on matrix operations, the CPU is a powerhouse. Note that this matrix multiplication hardware is part of the CPU cores and separate from the Neural Engine hardware.

This raises the question: why would a smartphone have neural net and machine learning capabilities, and does that have anything to do with the data transmission problem at the edge? A few years ago, I thought the idea wasn't feasible, but the capability of distributed devices has accelerated. How far-fetched is this?

Let's roll the clock back thirty years. Each fall, the finance department of a large diversified organization would prepare a package of spreadsheets for every part of the organization that had budget authority. The sheets would start with low-level detail, official assumptions, etc., until they all rolled up to a small number of summary sheets that were submitted to headquarters. This was a terrible, cumbersome way of doing things, but it does, in a way, presage the concept of federated learning.

Another idea that vanished was push technology, which placed the same load on the network as centralizing sensor data, just in the opposite direction. About twenty-five years ago, when everyone had a networked PC on their desk, the PointCast Network used push technology. It did not perform as well as expected, widely believed to be because its traffic burdened corporate networks with excessive bandwidth use, and it was banned in many places. If federated learning is to work, those problems have to be addressed.

Though this estimate changes every day, there are some 3 billion smartphones in the world and 7 billion connected devices. You can almost hear the buzz of all of that data flying around. The canonical image of ML is that all of that data needs to find a home somewhere so that algorithms can crunch through it to yield insights. There are a few problems with this, especially if the data is coming from personal devices such as smartphones, Fitbits, even smart homes.

Moving highly personal data across the network raises privacy issues, and centralizing this data at scale is costly. Cloud storage costs are asymptotically approaching zero, but transmission costs are not: that includes both the local hop from the devices (WiFi or even cellular) and the long-distance transmission from local collectors to the central repository. At this scale, it is all very expensive.
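
A back-of-envelope calculation shows the scale. The per-device figure below is deliberately made up, purely for illustration:

```python
# Rough scale arithmetic with a hypothetical per-device figure:
# suppose each of 3 billion smartphones uploaded just 100 MB/day
# of sensor and usage data for centralized training.
devices = 3_000_000_000
mb_per_device_per_day = 100          # hypothetical, for illustration only

petabytes_per_day = devices * mb_per_device_per_day / 1e9  # 1 PB = 1e9 MB
print(petabytes_per_day)             # 300.0 PB/day crossing the network
```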

Suppose large-scale AI training could be done on each device, bringing the algorithm to the data rather than vice versa? Each device could contribute to a broader application without having to send its data over the network. This idea has become respectable enough that it has a name - Federated Learning.

Jumping ahead, there is no controversy here: training a network in a way that compromises device performance and user experience, or compressing a model and settling for lower accuracy, are not acceptable alternatives. In Federated Learning: The Future of Distributed Machine Learning:

To train a machine learning model, traditional machine learning adopts a centralized approach that requires the training data to be aggregated on a single machine or in a datacenter. This is practically what giant AI companies such as Google, Facebook, and Amazon have been doing over the years. This centralized training approach, however, is privacy-intrusive, especially for mobile phone users. To train or obtain a better machine learning model under such a centralized training approach, mobile phone users have to trade their privacy by sending their personal data stored inside phones to the clouds owned by the AI companies.

The federated learning approach decentralizes training across mobile phones dispersed geographically. The presumption is that they collaboratively train a machine learning model while keeping their personal data on their phones - building a general-purpose recommendation engine for music listeners, for example. While the personal data and personal information are retained on the phone, I am not at all comfortable that the data contained in the result sent to the collector cannot be reverse-engineered - and I haven't heard a convincing argument to the contrary.
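
To make that concern concrete, here is a toy example of my own (not from any of the cited pieces): for a linear model with squared-error loss, the gradient a device would transmit is just a scaled copy of the private input, so an observer holding only the update recovers the input's direction exactly.

```python
import numpy as np

# For y_hat = w @ x with squared-error loss, the gradient w.r.t. w is
# 2 * (y_hat - y) * x -- a scalar multiple of the private input x.
rng = np.random.default_rng(0)
x = rng.normal(size=5)      # a user's private feature vector
y = 1.0                     # the private label
w = rng.normal(size=5)      # current global model weights

residual = w @ x - y
grad = 2 * residual * x     # the "update" the device would transmit

# From grad alone, x is recoverable up to an unknown scale factor:
x_direction = grad / np.linalg.norm(grad)   # equals +/- x / ||x||
print(np.allclose(np.abs(x_direction @ x / np.linalg.norm(x)), 1.0))  # True
```

Real networks and batched training muddy this picture, but the underlying point stands: unprotected updates are not automatically anonymous.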

Here is how it works. A computing group is a collection of mobile devices whose owners have opted in to a large-scale AI program. Each device is "pushed" a model, executes it locally, and learns as the model processes the local data. There are some alternatives within this scheme: homogeneous models imply that every device works with the same schema of data, while with heterogeneous models, harmonization of the data happens in the cloud. A sketch of the homogeneous case follows.
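
Here is a minimal federated-averaging loop in plain NumPy - my own illustration of the pattern, not code from any production system:

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0, 0.5])

def local_data(n=64):
    # One device's private dataset; it never leaves the device.
    X = rng.normal(size=(n, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_train(w, X, y, lr=0.1, epochs=5):
    # On-device training: a few gradient steps from the server's model.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

clients = [local_data() for _ in range(10)]
w_global = np.zeros(3)

for _ in range(20):
    # Server pushes w_global; each client returns trained weights only.
    local_weights = [local_train(w_global, X, y) for X, y in clients]
    # Server aggregates by (equal-weight) averaging.
    w_global = np.mean(local_weights, axis=0)

print(w_global)  # approaches true_w; no raw data ever left a client
```

Real systems sample a subset of devices each round and weight the average by local dataset size, but the shape of the loop is the same: push weights out, train locally, average what comes back.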

Here are some of the questions on my mind.

Here is the fuzzy part: federated learning sends the results of the learning, along with some operational detail such as model parameters and corresponding weights, back to the cloud. How does it do that while preserving your privacy and without clogging up your network? On bandwidth, the answer is that the results are a small fraction of the data, and since the data itself is no more than a few GB, that seems plausible. On privacy, the results sent to the cloud can be encrypted with, for example, homomorphic encryption (HE). An alternative is to send the update as a tensor, unencrypted, on the theory that it is not understandable by anything but the algorithm. The update is then aggregated with other users' updates to improve the shared model. Most importantly, all the training data remains on the user's device.
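
As an illustration of the HE option, here is a toy aggregation using the python-paillier (phe) library - my choice for the example, as the article does not name a specific scheme. Paillier is additively homomorphic, which is exactly what averaging updates requires:

```python
import numpy as np
from phe import paillier  # python-paillier

# Key pair: in practice the private key would sit with a trusted
# aggregator (or be replaced by a secure-aggregation protocol).
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Three clients' local weight updates (tiny vectors for the demo).
client_updates = [np.random.randn(4) for _ in range(3)]

# Each client encrypts its update element-wise before sending it.
encrypted_updates = [[public_key.encrypt(float(v)) for v in update]
                     for update in client_updates]

# The server adds ciphertexts; it never sees any individual update.
encrypted_sum = encrypted_updates[0]
for update in encrypted_updates[1:]:
    encrypted_sum = [a + b for a, b in zip(encrypted_sum, update)]

# Only the key holder decrypts, and only the aggregate is revealed.
average = np.array([private_key.decrypt(v) for v in encrypted_sum]) / 3
print(np.allclose(average, np.mean(client_updates, axis=0)))  # True
```

The point is that the server can compute the sum it needs without decrypting any individual contribution; who holds the private key is, of course, the crux.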

In CDO Review, The Future of AI May Be in Federated Learning:

Federated Learning allows for faster deployment and testing of smarter models, lower latency, and less power consumption, all while ensuring privacy. Also, in addition to providing an update to the shared model, the improved (local) model on your phone can be used immediately, powering experiences personalized by the way you use your phone.

There is a lot more to say about this. The privacy claims are a little hard to believe. When an algorithm is pushed to your phone, it is easy to imagine how this could backfire. Even the tensor representation can create a problem: an indirect reference to real data may be secure on its own, but patterns across an extensive collection can surely emerge.

