Hi, I had the opportunity to participate in this conference in Zurich on 27 October 2017 and attend the following sessions:
- Build Your Intelligent Enterprise with SAP Machine Learning
- Applied AI: Real-World Use Cases for Microsoft’s Azure Cognitive Services
- Using messaging and AI to build novel user interfaces for work
- JVM based DeepLearning on IoT data with Apache Spark
- Apache Spark for Machine Learning on Large Data Sets
- Anatomy of an open source voice assistant
- Building products with TensorFlow
Most of the sessions were recorded and are available here:
The first session was more of a sales presentation with pre-recorded demos of SAP's AI capabilities, mainly in their cloud:
But it had some interesting ideas, like the Brand Impact video analyzer, which computes how much airtime specific brands fill inside a video:
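SAP didn't share implementation details, but the core airtime computation can be sketched in a few lines, assuming a (hypothetical) detector that already returns the brand logos visible in each frame:

```python
from collections import Counter

def brand_airtime(frame_detections, fps=25):
    """Given per-frame brand detections (one list of brand names per
    video frame), return seconds of airtime per brand."""
    counts = Counter()
    for brands in frame_detections:
        for brand in set(brands):  # count each brand at most once per frame
            counts[brand] += 1
    return {brand: frames / fps for brand, frames in counts.items()}

# 4 frames at 2 fps: "acme" appears in 3 frames -> 1.5 s of airtime
detections = [["acme"], ["acme", "globex"], [], ["acme"]]
print(brand_airtime(detections, fps=2))
```

The hard part in the real product is of course the per-frame logo detection; the aggregation above is the easy half.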
Another good use case was automatic defective-product recognition using an image similarity distance API:
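The talk stayed at the demo level; a minimal sketch of the idea, assuming images have already been turned into feature embeddings (the `euclidean` helper and the threshold are illustrative, not SAP's API):

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def is_defective(embedding, reference_embeddings, threshold=0.5):
    """Flag a product photo as defective when its embedding is far
    from every known-good reference embedding."""
    distance = min(euclidean(embedding, ref) for ref in reference_embeddings)
    return distance > threshold

good = [[0.0, 0.0], [0.1, 0.0]]
print(is_defective([0.05, 0.02], good))  # close to a good sample -> False
print(is_defective([2.0, 2.0], good))    # far from all good samples -> True
```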
The second session covered the new AI capabilities offered by Microsoft and was divided into two parts:
Capabilities for data scientists who want to build their own Python models:
- Azure Machine Learning Workbench, an Electron-based desktop app that mainly accelerates data preparation tasks with a “learn by example” engine that generates data preparation code on the fly
- Azure Notebooks, a free but limited cloud-based Jupyter notebook environment to share and re-use models/notebooks
- Azure Data Science Virtual Machine, a pre-built VM with all the most common data science packages (TensorFlow, Caffe, R, Python, etc.)
Capabilities (i.e. face/age/sentiment/OCR/handwriting detection) for developers who want to consume Microsoft's pre-trained models by calling the Microsoft Cognitive Services APIs directly
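To give a flavour of how these pre-trained models are consumed: a single authenticated REST call. The endpoint, API version and parameters below reflect the 2017-era Computer Vision API and may have changed since, so treat them as an illustration of the call shape rather than a reference (the request is built but deliberately not sent):

```python
import json
import urllib.request

# Illustrative endpoint; check the current Cognitive Services docs
# for the exact URL, version and parameters.
ENDPOINT = "https://westeurope.api.cognitive.microsoft.com/vision/v1.0/analyze"

def build_analyze_request(image_url, subscription_key):
    """Build (but do not send) a request asking the Computer Vision
    API to describe an image and detect faces."""
    body = json.dumps({"url": image_url}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT + "?visualFeatures=Description,Faces",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_analyze_request("https://example.com/photo.jpg", "<your-key>")
print(req.method, req.full_url)
```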
The third session was more of an “educational presentation” on deep learning and on how, at a high level, a deep learning system works; still, the talk covered some interesting topics:
- The existence of several pre-trained models that can be used as-is, especially for featurization and/or transfer learning
- How to visualize neural networks with websites like http://playground.tensorflow.org
- A significant number of demos showcasing DNN applications that run directly in the browser
The fourth session was also interesting, because the speaker clearly explained the possibilities and limits of the current application development landscape, and of enterprise bots in particular.
Key takeaway: bots are far from smart, and people don't want to type text.
Suggested approach: bots are new apps that reach their “customers” in the channels they already use (Slack, for example); by exploiting the context and the channel's functionality, these new apps have to extend and at the same time simplify the IT landscape.
Example: a bot in a Slack channel notifies a manager of an approval request, and the manager can approve/deny directly in Slack without leaving the app.
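The bot-side logic of that example boils down to something like the following sketch (request IDs and fields are hypothetical, and a real bot would of course go through the Slack API to receive the button click and post the confirmation):

```python
def handle_action(pending_requests, request_id, action, manager):
    """Minimal approval-bot logic: react to an approve/deny button
    click coming from the chat channel, without the manager ever
    leaving the app."""
    request = pending_requests[request_id]
    request["status"] = "approved" if action == "approve" else "denied"
    request["decided_by"] = manager
    return f"Request {request_id} {request['status']} by {manager}"

pending = {"REQ-1": {"summary": "New laptop", "status": "pending"}}
print(handle_action(pending, "REQ-1", "approve", "alice"))
```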
The fifth and the sixth talks were rather technical/educational, covering specific frameworks (IBM SystemML for Spark) and model portability (PMML), with some good points about hyperparameter tuning using a Spark cluster in iterative mode and about DNN autoencoders.
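The hyperparameter point was about fanning candidate configurations out over a Spark cluster; the same idea can be sketched with the Python standard library in place of Spark (the `evaluate` function below is a made-up stand-in for a real train-and-validate step):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def evaluate(params):
    """Stand-in for training a model with these hyperparameters and
    returning a validation score (higher is better)."""
    lr, depth = params
    return -((lr - 0.1) ** 2) - ((depth - 4) ** 2)

# Candidate configurations: each one can be scored independently,
# which is why the search parallelizes so well on a cluster.
grid = list(product([0.01, 0.1, 1.0], [2, 4, 8]))

with ThreadPoolExecutor() as pool:
    scores = list(pool.map(evaluate, grid))

best = max(zip(scores, grid))
print(best)  # highest score is reached at lr=0.1, depth=4
```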
The seventh talk was about the open source voice assistant Mycroft and the related open source device schematics.
The session consisted mainly of live demos showcasing several open source libraries that can be used to create a device with Alexa-like capabilities:
- PocketSphinx for speech recognition
- Padatious for NLP intent detection
- Mimic for text to speech
- Adapt for intent parsing
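To give a flavour of what Adapt-style intent parsing does (this is a toy re-implementation of the concept, not Adapt's actual API): an intent fires when all of its registered keywords appear in the utterance:

```python
def parse_intent(utterance, intents):
    """Toy keyword-based intent parser in the spirit of Adapt: an
    intent matches when all of its required keywords are present
    in the utterance."""
    words = set(utterance.lower().split())
    for name, keywords in intents.items():
        if keywords <= words:  # all required keywords found
            return name
    return None

intents = {
    "weather": {"weather"},
    "timer.set": {"set", "timer"},
}
print(parse_intent("hey mycroft set a timer for tea", intents))  # timer.set
```

Real parsers also extract entities (e.g. the duration of the timer) and rank competing intents by confidence; this sketch only shows the keyword-matching core.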
The last session was about TensorFlow, but also more generally about Google's experiences with AI, like how ML is used today:
And how fundamental machine learning is today, with quotes like these:
- “Remember in 2010, when the hype was mobile-first? Hype was right. Machine Learning is similarly hyped now. Don’t get left behind”
- “You must consider the user journey, the entire system. If users touch multiple components to solve a problem, transition must be seamless”
Other pieces of advice were about finding talent and maintaining/growing/spreading ML inside your organization:
How to hire ML experts:
- don’t ask a Quant to figure out your business model
- design autonomy
- $$$ for compute & data acquisition
- Never done!
How to grow an ML practice:
- Find ML Ninja (SWE + PM)
- Do Project incubation
- Do ML office hours / consulting
How to spread the knowledge:
- Build ML guidelines
- Perform internal training
- Do open sourcing
And on ML algorithm project prioritization and execution:
- Pick algorithms based on the success metrics & data you can get
- Pick a simple one and invest 50% of time into building quality evaluation of the model
- Build an experiment framework for eval & release process
- Feedback loop
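The points above can be sketched as a minimal experiment framework: a shared evaluation on a held-out set, plus a release gate that compares the candidate against the baseline (function names, the toy models and the margin are all illustrative):

```python
def evaluate_model(predict, labelled_examples):
    """Accuracy of a model on a held-out labelled set — the shared
    success metric everything else hangs off."""
    correct = sum(predict(x) == y for x, y in labelled_examples)
    return correct / len(labelled_examples)

def release_gate(candidate, baseline, eval_set, margin=0.01):
    """Ship the candidate only if it beats the baseline by a margin;
    the feedback loop then feeds new labelled data into eval_set."""
    return evaluate_model(candidate, eval_set) >= evaluate_model(baseline, eval_set) + margin

eval_set = [(1, "pos"), (2, "pos"), (-1, "neg"), (-3, "neg")]
baseline = lambda x: "pos"                       # always guesses "pos"
candidate = lambda x: "pos" if x > 0 else "neg"  # a simple sign rule
print(release_gate(candidate, baseline, eval_set))  # True: 1.0 vs 0.5
```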
Overall the quality was good, even if I was really disappointed to discover in the morning that one of the most interesting sessions (with the legendary George Hotz!) had been cancelled.