Jazoon 2017 AI meet Developers Conference Review

Hi, I had the opportunity to attend this conference in Zurich on 27 October 2017 and to follow these sessions:

  • Build Your Intelligent Enterprise with SAP Machine Learning
  • Applied AI: Real-World Use Cases for Microsoft’s Azure Cognitive Services
  • Run Deep Learning models in the browser with JavaScript and ConvNetJS
  • Using messaging and AI to build novel user interfaces for work
  • JVM based DeepLearning on IoT data with Apache Spark
  • Apache Spark for Machine Learning on Large Data Sets
  • Anatomy of an open source voice assistant
  • Building products with TensorFlow

Most of the sessions were recorded and are available here:

https://www.youtube.com/channel/UC9kq7rpecrCX7S_ptuA20OA

The first session was more of a sales presentation with pre-recorded demos of SAP's AI capabilities, mainly in their cloud.


But it included some interesting ideas, like the Brand Impact video analyzer, which computes how much airtime is filled by specific brands inside a video.


Another good use case shown was automatic defective-product recognition using an image-similarity distance API.


The second session covered the new AI capabilities offered by Microsoft and was divided into two parts:

Capabilities for data scientists who want to build their own Python models:

  • Azure Machine Learning Workbench, an Electron-based desktop app that mainly accelerates data preparation tasks with a “learn by example” engine that generates data preparation code on the fly.


  • Azure Notebooks, a free but limited cloud-based Jupyter notebook environment for sharing and re-using models/notebooks.


  • Azure Data Science Virtual Machine, a pre-built VM with the most common data science packages (TensorFlow, Caffe, R, Python, etc.).


Capabilities (e.g. face, age, sentiment, OCR and handwriting detection) for developers who want to consume Microsoft's pre-trained models by calling the Microsoft Cognitive Services APIs directly.
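As an illustration, here is a minimal, hypothetical sketch of such a call in Python against the Computer Vision "analyze" endpoint; the subscription key, region and image URL are placeholders you would replace with your own:

```python
# Hypothetical sketch: calling the Computer Vision "analyze" endpoint.
# SUBSCRIPTION_KEY, REGION and the image URL are placeholders, not real values.
import requests

SUBSCRIPTION_KEY = "<your-cognitive-services-key>"  # assumption: a Vision resource exists
REGION = "westeurope"                               # assumption: the resource's region

endpoint = f"https://{REGION}.api.cognitive.microsoft.com/vision/v1.0/analyze"
params = {"visualFeatures": "Faces,Description,Tags"}  # faces, auto-caption, tags
headers = {
    "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
    "Content-Type": "application/json",
}
body = {"url": "https://example.com/some-photo.jpg"}

response = requests.post(endpoint, params=params, headers=headers, json=body)
response.raise_for_status()
analysis = response.json()

# Print the estimated age/gender of each detected face and the auto-generated caption
for face in analysis.get("faces", []):
    print(face.get("age"), face.get("gender"))
print(analysis["description"]["captions"][0]["text"])
```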


The third session was more of an “educational presentation” about deep learning and how, at a high level, a deep learning system works. Still, this talk touched on some interesting topics:

  • The existence of several pre-trained models that can be used as-is, especially for featurization and/or transfer learning (see the sketch after this list)


  • How to visualize neural networks with websites like http://playground.tensorflow.org
  • A significant number of demos showcasing DNN applications that run directly in the browser
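As a rough illustration of featurization with a pre-trained model, here is a minimal Keras-style sketch; VGG16 and the image path are my own choices, not something shown in the talk:

```python
# Minimal featurization sketch with a pre-trained network (Keras + VGG16).
# Assumption: the keras package with ImageNet weights; the image path is a placeholder.
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input
from keras.preprocessing import image

# include_top=False drops the classification head, keeping only the feature extractor;
# pooling="avg" collapses the convolutional output into one vector per image.
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

img = image.load_img("some_image.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

features = model.predict(x)  # a 512-dimensional descriptor of the image
print(features.shape)        # (1, 512)
```

A downstream classifier can then be trained on these feature vectors instead of raw pixels, which is exactly the transfer-learning shortcut the speaker alluded to.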

The fourth session was also interesting, because the speaker clearly explained the possibilities and limits of the current application development landscape, and in particular of enterprise bots.


Key takeaway: bots are far from being smart, and people don’t want to type text.

Suggested approach: bots are new apps that reach their “customers” in the channels those customers already use (Slack, for example), and these new apps, by exploiting the context and the channel’s functionality, have to extend and at the same time simplify the IT landscape.


Example: a bot in a Slack channel notifies a manager of an approval request, and the manager can approve/deny directly in Slack without leaving the app.
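A hypothetical sketch of such an approval message, using Slack's chat.postMessage Web API with message buttons; the token, channel and request details are placeholders:

```python
# Sketch of the approval-bot idea: post an interactive message with Approve/Deny
# buttons to a Slack channel. Token, channel and request ID are placeholders.
import requests

SLACK_TOKEN = "xoxb-<bot-token>"  # assumption: a bot token with permission to post

message = {
    "channel": "#approvals",
    "text": "Expense request #4711 from Alice needs your approval.",
    "attachments": [{
        "text": "Approve this request?",
        "callback_id": "approval_4711",   # ties the button click back to the request
        "actions": [
            {"name": "decision", "type": "button", "text": "Approve", "value": "approve"},
            {"name": "decision", "type": "button", "text": "Deny", "value": "deny"},
        ],
    }],
}

resp = requests.post(
    "https://slack.com/api/chat.postMessage",
    headers={"Authorization": f"Bearer {SLACK_TOKEN}"},
    json=message,
)
print(resp.json().get("ok"))
```

The button click would arrive at your app as an interactive-message callback, where you complete the approval in the backend system; that round trip is what keeps the manager inside Slack.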

The fifth and sixth talks were rather technical/educational, covering specific frameworks (IBM SystemML for Spark) and model portability (PMML), with some good points about hyper-parameter tuning using a Spark cluster in iterative mode and DNN autoencoders.
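The talks used IBM SystemML; as a generic illustration of the hyper-parameter tuning idea, here is a PySpark sketch that fans a parameter grid out across a cluster. The toy dataset and ridge model are my stand-ins, and scikit-learn is assumed to be installed on the executors:

```python
# Sketch: grid-search hyper-parameter tuning parallelized on a Spark cluster.
# Each Spark task trains and cross-validates one candidate configuration.
from pyspark.sql import SparkSession
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

spark = SparkSession.builder.appName("hyperparam-search").getOrCreate()
sc = spark.sparkContext

X, y = load_diabetes(return_X_y=True)
data = sc.broadcast((X, y))  # ship the (small) training set to every executor once

def evaluate(alpha):
    X, y = data.value
    score = cross_val_score(Ridge(alpha=alpha), X, y, cv=3).mean()
    return (alpha, score)

alphas = [0.01, 0.1, 1.0, 10.0, 100.0]
results = sc.parallelize(alphas).map(evaluate).collect()  # one model per task
print(max(results, key=lambda r: r[1]))  # best (alpha, score) pair
```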


The seventh talk was about the open-source voice assistant Mycroft and the related open-source device schematics.

The session consisted principally of live demos showcasing several open-source libraries that can be used to create a device with Alexa-like capabilities (see the sketch after the list):

  • Pocketsphinx for speech recognition
  • Padatious for NLP intent detection
  • Mimic for text-to-speech
  • Adapt for intent parsing
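For instance, a minimal speech-recognition loop with Pocketsphinx's Python bindings could look like this; it assumes the pocketsphinx package, its bundled English acoustic model and a working microphone:

```python
# Minimal listening loop with Pocketsphinx's Python bindings.
# LiveSpeech records from the default microphone and yields decoded utterances.
from pocketsphinx import LiveSpeech

for phrase in LiveSpeech():
    print("heard:", phrase)
    if str(phrase) == "stop listening":  # hypothetical exit phrase for this sketch
        break
```

In a full assistant, each decoded utterance would then be handed to an intent parser (Adapt or Padatious in Mycroft's stack), and the reply synthesized with Mimic.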


The last session was about TensorFlow, but also more generally about AI experiences from Google, like how ML is used today.


And how machine learning is fundamental today, with quotes like these:

  • Remember in 2010, when the hype was mobile-first? Hype was right. Machine Learning is similarly hyped now. Don’t get left behind
  • You must consider the user journey, the entire system. If users touch multiple components to solve a problem, transition must be seamless

Other pieces of advice were about finding talent and maintaining/growing/spreading ML inside your organization:

How to hire ML experts:

  1. don’t ask a Quant to figure out your business model
  2. design autonomy
  3. $$$ for compute & data acquisition
  4. Never done!

How to grow an ML practice:

  1. Find ML Ninja (SWE + PM)
  2. Do Project incubation
  3. Do ML office hours / consulting

How to spread the knowledge:

  1. Build ML guidelines
  2. Perform internal training
  3. Do open sourcing

And on ML project prioritization and execution (a sketch of points 2 and 3 follows the list):

  1. Pick algorithms based on the success metrics & data you can get
  2. Pick a simple one and invest 50% of time into building quality evaluation of the model
  3. Build an experiment framework for eval & release process
  4. Feedback loop
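A minimal sketch of points 2 and 3 in scikit-learn, with a toy dataset of my choosing: a fixed evaluation harness that judges a trivial baseline and a simple model by the same success metric, so later, fancier models can be slotted into the same loop:

```python
# Sketch of "pick a simple model, invest in evaluation": one shared harness
# compares a trivial baseline against a simple model on a fixed metric.
from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def evaluate(model):
    """One shared success metric so every candidate is judged the same way."""
    model.fit(X_train, y_train)
    return f1_score(y_test, model.predict(X_test))

print("baseline:    ", evaluate(DummyClassifier(strategy="most_frequent")))
print("simple model:", evaluate(LogisticRegression(max_iter=1000)))
```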

Overall the quality was good, even if I was really disappointed to discover in the morning that one of the most interesting sessions (with the legendary George Hotz!) had been cancelled.


AI is progressing at incredible speed!

Several people tend to think that all the new AI technologies, like convolutional neural networks, recurrent neural networks, generative adversarial networks, etc., are used mainly by tech giants like Google and Microsoft. In reality, many enterprises are already leveraging deep learning in production, like Zalando, Instacart and many others. Well-known deep learning frameworks like Keras, TensorFlow, CNTK, Caffe2, etc. are now finally reaching a larger audience.

Big data engines like Spark are finally able to drive deep learning workloads too, and the first steps toward making large deep neural network models fit inside small-CPU, low-memory, occasionally connected IoT devices are coming.

Finally, new hardware has been built specifically for deep learning:

  1. https://www.microsoft.com/en-us/research/blog/microsoft-unveils-project-brainwave/
  2. https://cloud.google.com/blog/big-data/2017/05/an-in-depth-look-at-googles-first-tensor-processing-unit-tpu
  3. https://www.nvidia.com/en-us/data-center/volta-gpu-architecture/

But the AI space never sleeps, and new solutions, frameworks and architectures are already arriving or under active development:

Ray: a new distributed execution framework that aims to replace the well-known Spark!

PyTorch and the fast.ai framework, which aim to compete with and beat the existing deep learning frameworks.

To overcome one of the biggest problems in deep learning, the amount of training data required, a new framework called Snorkel has been designed to create brand-new training data with little human interaction.
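The core idea, sketched here in plain Python: domain experts write several cheap, noisy "labeling functions" over unlabeled data, and their votes are combined into training labels. Snorkel itself learns the accuracy of each function with a generative model; the spam/ham functions and the naive majority vote below are my simplifications:

```python
# Concept sketch of Snorkel-style weak supervision (not Snorkel's actual API).
SPAM, HAM, ABSTAIN = 1, 0, -1

def lf_contains_link(text):
    return SPAM if "http://" in text or "https://" in text else ABSTAIN

def lf_all_caps(text):
    return SPAM if text.isupper() else ABSTAIN

def lf_short_greeting(text):
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

def weak_label(text, lfs=(lf_contains_link, lf_all_caps, lf_short_greeting)):
    """Combine the labeling functions' votes; Snorkel would weight them instead."""
    votes = [lf(text) for lf in lfs if lf(text) != ABSTAIN]
    if not votes:
        return ABSTAIN
    return max(set(votes), key=votes.count)  # naive majority vote

print(weak_label("CLICK NOW http://spam.example"))  # -> 1 (spam)
print(weak_label("hi, see you at the conference"))  # -> 0 (ham)
```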

Finally, to enable better integration and performance of deep learning models with the applications that consume them, a new prediction serving system, Clipper, has been designed.

The speed of AI evolution is incredible; be prepared to see much more than this in the near future!

My Top 2 Microsoft Build 2017 Sessions

Let’s start with Number 1: this is the visionary cloud that is arriving, compute nodes combined with FPGA “neurons” that act as hardware microservices, communicating and changing their internal code while directly attached to the Azure network, like a global neural network. Do you want to know more? Click here and explore the content of this presentation with the new Video Indexer AI, jumping directly to the portions of the video you like and searching for words, concepts and people appearing in the video.

We can then look here at Number 2 (go to 1:01:54):

Matt Velloso teaching a robot (Zenbo) to recognize the images it sees, using the Microsoft Bot Framework and the new custom image recognition service.

Do you want to explore more?

Go to Channel 9 and have fun exploring all the massive updates that have been released!

Image recognition for everyone

Warning: I’m NOT a data scientist, but I’m a huge fan of cool technology!

Today I want to write about a new functionality that amazes me and that can help you literally do “magic” things, the kind you might think are exclusive to super data scientists expert in deep learning and frameworks like TensorFlow, CNTK, Caffe, etc.

Imagine the following: someone trains huge neural networks (think of them as mini brains) for weeks/months using a lot of GPUs on thousands and thousands of images.

These mini brains are then used to classify images and say something like: a car is present, a human is present, a cat is present, etc. Now, one of the “bad things” about neural networks is that usually you cannot understand how they really work internally and what the “thinking process” of a neural network is.

(figure: featurization)

However, recent studies on neural networks have found a way to “extract” this knowledge, and this April Microsoft released this knowledge, or rather these pre-trained models.

Now I want to show you an example of how to do this.

Let’s grab some images of the Simpsons, and some other images of the Flintstones.

For example, 13 images from the Simpsons cartoon and 11 from the Flintstones. Let’s build a program that, given a new image not part of the two image sets, can predict whether it is a Simpsons or a Flintstones image. I’ve chosen cartoons, but you can apply this to any images you want to process (watches? consoles? vacation places? etc.).

The idea is the following: I take the images I have and feed them to the “model” that has been trained. The result of this process will be, for each image, a collection of “numbers” that are the representation of that image according to the neural network. An analogy to understand this: our DNA is a small fraction of ourselves, but it can “represent” us, so these “numbers” are the DNA of the image.

Now that we have each image represented by a simple array of numbers, we can use a “normal” machine learning technique, like a linear model, to leverage this simplified representation of the image and learn how to classify the images.
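The article's code is in R; here is a rough Python analogue, assuming a featurize() helper like the VGG16 extractor sketched earlier, placeholder file names, and scikit-learn's logistic regression as the linear classifier:

```python
# Hypothetical Python analogue of the article's R workflow: a linear classifier
# trained on pre-extracted image feature vectors. `featurize` is assumed to wrap
# a pre-trained network (see the earlier VGG16 sketch); file names are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

simpsons = [f"simpsons_{i}.jpg" for i in range(13)]        # 13 Simpsons images
flintstones = [f"flintstones_{i}.jpg" for i in range(11)]  # 11 Flintstones images

X = np.vstack([featurize(path) for path in simpsons + flintstones])
y = np.array([1] * len(simpsons) + [0] * len(flintstones))  # 1 = Simpsons

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```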

Applying the sample R code described in the article to just this small set of images (13 and 11 respectively), using 80% for training and 20% for scoring, we obtained the following result:


A good 75% accuracy on a very small number of images!