UniFi – Install a UniFi Cloud Controller on Google Cloud Platform Compute Engine

This time, let's see how to set up the UniFi Controller software on Google Cloud Platform (GCP) in a few very simple steps.

Step 1: Go to https://console.cloud.google.com/ and register. You will receive $300 of Google credits that can be used during the first 12 months but, more importantly, access to the free tier!

Step 2: Once you have your account up and running, you can provision a Linux instance by clicking on the big "Compute Engine" button and then "VM instances".

Step 3: After you have selected the virtual machine, just give it a name, choose a data center near where you live, pick the micro size (free!), and pick Ubuntu 16.04 as the OS.

Step 4: Set a network tag for this instance (it will be used later) and set SSH keys if needed (you can do everything with the web SSH console without having to specify them).

Step 5: In the additional settings leave everything at the default values and finally hit the Create button!

Step 6: After a few seconds your instance is ready and you should be able to see it running. Write down the public IP address, because you will need it shortly.

Step 7: Now we have to open the required ports for the controller to work correctly.

First go to the VPC network tab of your account and select Firewall rules:

Here, add a firewall rule specific to your controller instance using the target tag that we defined earlier, put 0.0.0.0/0 as the IP range to allow connections from any IP, and set these ports to be open:

tcp:8443;tcp:8080;tcp:8843;tcp:8880;tcp:6789;udp:3478

Step 8: Connect with the web SSH console and install the UniFi Controller software with these commands:

echo "deb http://www.ubnt.com/downloads/unifi/debian unifi5 ubiquiti" | sudo tee -a /etc/apt/sources.list

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

sudo apt-get update

sudo apt-get install unifi

Step 9: Connect to the controller web interface at https://IP_Address:8443/ and complete the UniFi setup wizard.

Finally, you may now proceed to adopt your UniFi devices using Layer 3 adoption!

Jazoon 2017 AI meet Developers Conference Review

Hi, I had the opportunity to participate in this conference in Zurich on 27 October 2017 and to attend the following sessions:

  • Build Your Intelligent Enterprise with SAP Machine Learning
  • Applied AI: Real-World Use Cases for Microsoft’s Azure Cognitive Services
  • Run Deep Learning models in the browser with JavaScript and ConvNetJS
  • Using messaging and AI to build novel user interfaces for work
  • JVM based DeepLearning on IoT data with Apache Spark
  • Apache Spark for Machine Learning on Large Data Sets
  • Anatomy of an open source voice assistant
  • Building products with TensorFlow

Most of the sessions were recorded and are available here:

https://www.youtube.com/channel/UC9kq7rpecrCX7S_ptuA20OA

The first session was more of a sales presentation, with pre-recorded demos, of SAP's AI capabilities, mainly in their cloud.

But it included some interesting ideas, like the Brand Impact video analyzer, which computes how much airtime is filled by specific brands inside a video.

Another good use case shown was the automatic recognition of defective products using the image similarity distance API.

The second session was about the new AI capabilities offered by Microsoft and was divided into two parts:

Capabilities for data scientists who want to build their own Python models:

  • Azure Machine Learning Workbench, an Electron-based desktop app that mainly accelerates data preparation tasks using a "learn by example" engine that creates data preparation code on the fly.

  • Azure Notebooks, a free but limited cloud-based Jupyter notebook environment to share and re-use models/notebooks.

  • Azure Data Science Virtual Machine, a pre-built VM with all the most common data science packages (TensorFlow, Caffe, R, Python, etc.).

Capabilities (e.g. face/age/sentiment/OCR/handwriting detection) for developers who want to consume Microsoft's pre-trained models by calling the Microsoft Cognitive Services APIs directly.

The third session was more of an "educational presentation" about deep learning and how, at a high level, a deep learning system works; however, it covered some interesting topics:

  • The existence of several pre-trained models that can be used as-is, especially for featurization purposes and/or for transfer learning

  • How to visualize neural networks with websites like http://playground.tensorflow.org
  • A significant number of demos showcasing DNN applications that run directly in the browser

The fourth session was also interesting, because the speaker clearly explained the possibilities and limits of the current application development landscape, and of enterprise bots in particular.

Key takeaway: bots are far from being smart, and people don't want to type text.

Suggested approach: bots are new apps that reach their "customers" in the channels those customers already use (Slack, for example), and these new apps, by using the context and channel functionalities, have to extend and at the same time simplify the IT landscape.

Example: a bot in a Slack channel notifies a manager of an approval request, and the manager can approve/deny directly in Slack without leaving the app.

The fifth and sixth talks were rather technical/educational, covering specific frameworks (IBM SystemML for Spark) and model portability (PMML), with some good points about hyperparameter tuning using a Spark cluster in iterative mode and about DNN autoencoders.

The seventh talk was about the open source voice assistant Mycroft and the related open source device schematics.

The session consisted mainly of live demos showcasing several open source libraries that can be used to create a device with Alexa-like capabilities:

  • PocketSphinx for speech recognition
  • Padatious for NLP intent detection
  • Mimic for text-to-speech
  • The Adapt intent parser

The last session was about TensorFlow, but also more generally about Google's experiences with AI, like how ML is used today.

It also covered how fundamental machine learning is today, with quotes like these:

  • Remember in 2010, when the hype was mobile-first? Hype was right. Machine Learning is similarly hyped now. Don’t get left behind
  • You must consider the user journey, the entire system. If users touch multiple components to solve a problem, transition must be seamless

Other pieces of advice were about finding talent and about maintaining/growing/spreading ML inside your organization:

How to hire ML experts:

  1. don’t ask a Quant to figure out your business model
  2. design autonomy
  3. $$$ for compute & data acquisition
  4. Never done!

How to Grow ML practice:

  1. Find ML Ninja (SWE + PM)
  2. Do Project incubation
  3. Do ML office hours / consulting

How to spread the knowledge:

  1. Build ML guidelines
  2. Perform internal training
  3. Do open sourcing

And on ML algorithms, project prioritization and execution:

  1. Pick algorithms based on the success metrics & data you can get
  2. Pick a simple one and invest 50% of time into building quality evaluation of the model
  3. Build an experiment framework for eval & release process
  4. Feedback loop

Overall the quality was good, even if I was really disappointed to discover in the morning that one of the most interesting sessions (with the legendary George Hotz!) had been cancelled.

Helping Troy Hunt for fun and profit

Hi everyone, I'm a huge fan of the security expert Troy Hunt and of his incredible (free!) service haveibeenpwned.com (if you don't know it, please use it now to test whether your email accounts have been compromised!).

Now Troy has created a contest where you can actually win a shiny Lenovo laptop if you create something "new" that can help people become more aware of the security risks related to pwned accounts.

I decided to participate, and my idea is the following: help all the people that have Gmail (and, in alpha version, Hotmail/Outlook/Office 365!) accounts verify whether their friends, colleagues and family members have compromised email accounts.

I uploaded the code and executables here, and I strongly suggest you read the readme instructions ENTIRELY to understand how the tool works, what the expected results are, and what you can do.
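At its core, this kind of check is a single HTTP call per address. Here is a minimal sketch in C# (not necessarily how the published tool does it, and assuming the HIBP v2 API that was current at the time of writing; note that HIBP also asks clients to rate-limit their requests):

using System;
using System.Net;

// Returns true if the account appears in at least one known breach.
static bool IsPwned(string email)
{
    var req = (HttpWebRequest)WebRequest.Create(
        "https://haveibeenpwned.com/api/v2/breachedaccount/" + Uri.EscapeDataString(email));
    req.UserAgent = "Pwnage-Checker-Demo"; // a User-Agent header is mandatory for this API
    try
    {
        using (var resp = (HttpWebResponse)req.GetResponse())
            return resp.StatusCode == HttpStatusCode.OK; // 200 = found in breaches
    }
    catch (WebException ex)
    {
        var resp = ex.Response as HttpWebResponse;
        if (resp != null && resp.StatusCode == HttpStatusCode.NotFound)
            return false; // 404 = not found in any breach
        throw;
    }
}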

Regardless of whether I win the laptop or not, I have already won, because thanks to this tool I was able to alert my wife and some of my friends of the danger, and it gave me the right "push" to convince them to set up two-factor authentication.

If you want to donate for this effort, please donate directly to Troy here; he deserves a good beer!

 

How to create the perfect Matchmaker with Bot Framework and Cognitive Services

Hi everyone, this time I wanted to showcase some of the many capabilities of Microsoft Cognitive Services using a "Cupido" bot built with the Microsoft Bot Framework.

So what is the plan? Here are some ideas:

  • Leverage only Facebook as the channel! Why? Because with Facebook people are already "logged in", and you can leverage the Messenger profile API to automatically retrieve the user's details and, more importantly, their Facebook photo!
  • Since the Facebook photo is usually an image with a face, we can send it to the Vision and Face APIs to understand gender, age and a bunch of other interesting info without any user interaction (see the sketch after this list)!
  • With a Custom Vision model trained on some publicly available images, we can score whether a person looks like a supermodel or not 😉
  • Using all this info (age, gender, makeup, sunglasses, supermodel or not, hair color, etc.) collected from those calls, we can decide which candidates inside our database are the right ones for our user and display the ones that fit according to our demo rules.
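To give an idea of how simple the Face API part is, here is a minimal sketch (the region in the URL and the attribute list are illustrative; substitute your own endpoint and key):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Sends the (publicly reachable) photo URL to the Face API and returns the
// raw JSON with the detected face attributes (age, gender, glasses, makeup, hair).
static async Task<string> AnalyzePhotoAsync(string photoUrl)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_FACE_API_KEY");
        var uri = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect" +
                  "?returnFaceAttributes=age,gender,glasses,makeup,hair";
        var body = new StringContent("{\"url\":\"" + photoUrl + "\"}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync(uri, body);
        return await response.Content.ReadAsStringAsync();
    }
}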

Of course, at the beginning our database of profiles will be empty, but with the help of friends/colleagues we can quickly fill it and have fun during the demo.

So how does it look in practice?

Here is the first interaction: after we say hello, the bot immediately personalizes the experience with our Facebook data (photo, name, last name, etc.) and asks whether we want to participate in the experiment.

After we accept, it uses the APIs described above to understand the image and calculate age, hair, supermodel score, etc.

Yeah, I know, my supermodel score is not really good, but let's see if there are any matches for me anyway…

Of course the bot is smart enough to display the profile of my wife, otherwise I would have been in really big trouble :-).

Now I guess many of you have this question: how is the supermodel score calculated?

Well, I trained Microsoft's Custom Vision service with 30+ photos of real models and 30+ photos of "normal people", and after 4 iterations I already had 90% accuracy at detecting supermodels in photos 😉

Of course there are several things to consider here:

  1. Images should be the focus of the picture
  2. Have sufficiently diverse images, angles, lighting, and backgrounds
  3. Train with images that are similar (in quality) to the images that will be used in scoring

And the supermodel pics surely have higher resolution, better lighting and better exposure than the photos of "normal" people like you and me, but for the purposes of this demo the results were very good.
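Once the model is trained, scoring a new photo is a single REST call against the prediction endpoint. A minimal sketch (the prediction URL below is a placeholder; copy the real one, together with the prediction key, from the Custom Vision portal):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Returns the raw JSON with the probability of each tag (e.g. "super model").
static async Task<string> ScoreSuperModelAsync(string photoUrl)
{
    using (var client = new HttpClient())
    {
        client.DefaultRequestHeaders.Add("Prediction-Key", "YOUR_PREDICTION_KEY");
        var body = new StringContent("{\"Url\":\"" + photoUrl + "\"}", Encoding.UTF8, "application/json");
        // Placeholder: use the "Prediction URL" shown for your trained iteration
        var response = await client.PostAsync("https://<prediction-url-from-portal>", body);
        return await response.Content.ReadAsStringAsync();
    }
}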

Another consideration is that you don't always have to use natural language processing in bots (in our case, in fact, we skipped LUIS), because, especially if you are not developing a Q&A/support bot, users prefer buttons and a minimal amount of info to provide.

Imagine a bot that handles your Netflix subscription: you just want buttons like activate/deactivate membership (for when you go on vacation) and "recommendations for tonight".
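With Bot Builder v3 (the framework version current at the time), a button-based reply from inside a dialog is just a hero card. A hedged sketch, with hypothetical button labels:

using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;

// Replies with two buttons instead of expecting the user to type free text.
private static async Task ShowMenuAsync(IDialogContext context)
{
    var reply = context.MakeMessage();
    reply.Text = "What would you like to do?";
    reply.Attachments = new List<Attachment>
    {
        new HeroCard
        {
            Buttons = new List<CardAction>
            {
                new CardAction(ActionTypes.ImBack, "Pause membership", value: "pause membership"),
                new CardAction(ActionTypes.ImBack, "Recommendations for tonight", value: "recommendations")
            }
        }.ToAttachment()
    };
    await context.PostAsync(reply);
}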

Another important thing to consider is bot analytics, to understand how your bot is performing; I leverage this great tool, which under the covers uses Azure Application Insights.

If instead you are in love with statistics, you can try this Jupyter notebook with the following template to analyze the Azure Application Insights metrics and events with your own custom code.

If you want to try the bot with all the telemetry setup already done, you can grab, compile and try the demo code (do not use this code in any production environment) that is available here; and if this is your first bot, start from this tutorial to understand a bit the various pieces needed.

The Customer Paradigm Executive

Fun article this time!

Contact me/comment if you want to apply 😂😂😂 for the role!

Customer Paradigm Executive

Candidates must be able to mesh e-business users and grow proactive networks and evolve 24/7 architectures as well.

This position requires you to syndicate innovative infomediaries and morph them with viral metrics in order to synergize back-end convergence.

The objective is to deliver brand wearable eyeballs and architect granular eyeballs deployed to B2B markets incentivizing leading-edge e-business and disintermediating 24/7 relationships.

You will also have to aggregate revolutionary communities and whiteboard end-to-end systems in order to orchestrate dynamic convergence and effectively monetize efficient interfaces that can morph scalable e-markets.

Send Emails with Adobe Campaign Marketing Cloud (Neolane)

Hi, this time, instead of downloading data from Neolane or updating recipients in it, we want to use Neolane as an email cannon, leveraging its powerful template engine to manage the email look & feel while deciding on the fly, via the Neolane API, the targets of our mass email campaign.

So the use case is the following: define an email template in Neolane with some field mappings, and leverage the Neolane API to send emails using this template while defining the recipients, and the contents of the mapped fields, externally.

According to the official Adobe documentation, this can be done using the Neolane Business Oriented APIs (we looked into the Data Oriented APIs in previous articles), as specified here:

https://docs.campaign.adobe.com/doc/AC6.1/en/CFG_API_Business_oriented_APIs.html#SubmitDelivery__nms-delivery_

Unfortunately the documentation is not really clear/complete, and I really had to dig into Adobe logs, error codes and SOAP responses to get this working properly; here is some sample code that can help you.

The code is based on the sample provided in the Adobe documentation (external file mapping, with the data coming from the CDATA section inside the delivery XML tag structure).

Here is the C# code:

// Note: adobeApiUrl is your Adobe Campaign SOAP endpoint; it is typically
// https://<your-instance>/nl/jsp/soaprouter.jsp
string adobeApiUrl = "https://<your-instance>/nl/jsp/soaprouter.jsp";

string sessionToken = "Look at the other Neolane code samples on how to retrieve the session token";
string securityToken = "Look at the other Neolane code samples on how to retrieve the security token";
string scenario = "Write here the internal name of the delivery template";

// Prepare the SOAP request for the SubmitDelivery method
HttpWebRequest reqData = (HttpWebRequest)WebRequest.Create(adobeApiUrl);
reqData.ContentType = "text/xml; charset=utf-8";
reqData.Headers.Add("SOAPAction", "nms:delivery#SubmitDelivery");
reqData.Headers.Add("X-Security-Token", securityToken);
reqData.Headers.Add("cookie", "__sessiontoken=" + sessionToken);
reqData.Method = "POST";

// SOAP envelope header: session token plus the delivery template (scenario) name
string strWriteHeader = "<?xml version='1.0' encoding='ISO-8859-1'?>" +
    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:urn=\"urn:nms:delivery\">" +
    "<soapenv:Header/>" +
    "<soapenv:Body>" +
    "<urn:SubmitDelivery>" +
    "<urn:sessiontoken>" + sessionToken + "</urn:sessiontoken>" +
    "<urn:strScenarioName>" + scenario + "</urn:strScenarioName>" +
    "<urn:elemContent>";

// Delivery content: the targets come from an "external source" provided inline
// in the CDATA section (a pipe-separated header line plus one row per recipient)
string strWriteRecipientBody = "<delivery>" +
    "<targets fromExternalSource=\"true\">" +
    "<externalSource><![CDATA[MsgId|ClientId|Title|Name|FirstName|Mobile|Email|Market_segment|Product_affinity1|Product_affinity2|Product_affinity3|Product_affinity4|Support_Number|Amount|Threshold" +
    System.Environment.NewLine +
    "1|000001234|M.|Phulpin|Hervé|0650201020|herve.phulpin@adobe.com|1|A1|A2|A3|A4|E12|120000|100000]]></externalSource>" +
    "</targets>" +
    "</delivery>";

string strWriteFooter = "</urn:elemContent>" +
    "</urn:SubmitDelivery>" +
    "</soapenv:Body>" +
    "</soapenv:Envelope>";

string bodyData = strWriteHeader + strWriteRecipientBody + strWriteFooter;
byte[] byteArrayData = Encoding.UTF8.GetBytes(bodyData);

// Set the ContentLength property of the WebRequest and write the request body
reqData.ContentLength = byteArrayData.Length;
using (Stream dataStreamInputData = reqData.GetRequestStream())
{
    dataStreamInputData.Write(byteArrayData, 0, byteArrayData.Length);
}

// Read the SOAP response
string responseFromServerData;
using (WebResponse responseData = reqData.GetResponse())
using (StreamReader readerData = new StreamReader(responseData.GetResponseStream()))
{
    responseFromServerData = readerData.ReadToEnd();
}

return responseFromServerData;

Tips on cloud solutions integration with Azure Logic Apps

The cloud paradigm is a well-established reality: old CRM systems are now replaced by Salesforce solutions, HR and finance systems by Workday, old Exchange servers and SharePoint intranets by Office 365, not to mention entire datacenters migrated to Amazon. At the same time, all the effort that was spent on premise to make all these systems communicate with each other (the glorious days of ESB solutions with TIBCO, WebSphere, BEA WebLogic, BizTalk, etc.) seems kind of lost in the new cloud world. Why?

Well, each cloud vendor (Salesforce first, I think) tries to position its own app store (AppExchange) with pre-built connectors or apps that quickly allow companies to integrate with other cloud platforms, in a way that is not only cost-effective but, above all, supported by the app vendor and not by a system integrator (so it is paid for by the license).

In theory this should work; in practice we see a lot of good will from niche players on these apps, but little or no commitment from the big vendors.

Luckily, however, the best cloud solutions already provide rich and secure APIs to enable integration; it's only a matter of connecting the dots, and several "integration" cloud vendors are already positioning themselves here: Informatica Cloud, Dell, SnapLogic, MuleSoft, etc. The Gartner quadrant for Integration Platform as a Service (iPaaS) represents the situation well.

But after Gartner produced that report in March 2015, Microsoft released a new kind of app on the Azure platform, called Azure Logic Apps.

What is the strength of this "service" compared to the others? Well, in my opinion, it lies on a de facto proven and standard platform that can scale as much as you want; it also gives you the possibility of writing your own connectors as Azure API Apps; and finally, you can create your integration workflows right in the browser, with no client attached! Of course it has many other benefits, but for me these three are so compelling that I decided to give it a try and start developing some POCs around it.

What can you do with them? Basically, you can connect any system with any other system, right from your browser!

Do you want to tweet something when a document is uploaded to your Dropbox? You can do it!

Do you want to define a complex workflow that starts from a message on the Service Bus, calls a BAPI on SAP and ends inside an Oracle stored procedure? It's right there!

There is a huge list of connectors that you can use, and each of them can be combined with the others in many interesting ways!

Connectors have actions and triggers! Triggers, as the word says, are useful to react to an event (a new tweet with a specific word, a new lead on Salesforce, etc.), and they can be used in a push or pull fashion ("I'm interested in this event, so the connector will notify me when it happens" or "I'm interested in this data, so I will periodically call the connector to check whether there is new data").

Actions are simply methods that can be executed on the connector (send an email, run a query, write a post on Facebook, etc.).

An Azure Logic App is a workflow where you combine these connectors, using triggers and actions, to achieve your goals.

How do they communicate with each other? I mean, how do I refer, inside a connector B that is linked to A, to A's data when performing the action? It is super simple! When you link two connectors, inside the target one you will see, on every action that requires data, pick lists where you can easily pick the source data! This works because each connector automatically describes its API schema using Swagger (this really rocks!).

And do you want to know the best part? If you write your own connector with Visual Studio, it will automatically generate the Swagger metadata for you! So in really 10 minutes you can have your brand new connector ready to use!

Added bonus: you automatically get a test API page generated by Swagger!
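For reference, the custom connector itself can be as small as a plain Web API controller; Swashbuckle (included in the Visual Studio Azure API App template) generates the Swagger metadata from the action signature, and that metadata is what the designer uses to populate the pick lists. A minimal, hypothetical example:

using System.Web.Http;

public class GreetingController : ApiController
{
    // The "name" parameter and the return type end up in the Swagger document,
    // so they appear as pickable inputs/outputs in the Logic App designer.
    [HttpGet]
    public string GetGreeting(string name)
    {
        return "Hello " + name + "!";
    }
}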

The Azure website is full of references to quickly ramp up on this technology, so instead of a full tutorial I want to give you some useful tips for your Logic Apps journey.

Tip 1: You will see that published connectors request some configuration values (package settings), and only after those are provided does the connector become usable in your logic app. I tried with no success to do the same in Visual Studio with a custom API App; the best I was able to find is that you can simulate this only if you deploy with a script from GitHub (look at the azuredeploy.json file; some examples here). At that stage, in fact, the deploy setup screen lets you set some configuration values that will never change once your Azure API App is published. The way this is done is to map deploy parameters to app settings like this:

"properties": {
"name": "[parameters('siteName')]",
"gatewaySiteName": "[parameters('gatewayName')]",
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('svcPlanName'))]",
 "siteConfig": {
  "appSettings": [
   {
   "name": "clientId",
   "value": "[parameters('MS_client_id')]"
   },
   {
   "name": "clientSecret",
   "value": "[parameters('MS_client_secret')]"
   }
  ]
 }
}

Then you can use the usual ConfigurationManager.AppSettings to read these values in your code.
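For example (using the standard System.Configuration API, with the setting names from the template above):

using System.Configuration;

// Reads the deploy-time values mapped into the app settings
string clientId = ConfigurationManager.AppSettings["clientId"];
string clientSecret = ConfigurationManager.AppSettings["clientSecret"];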

I guess this (the possibility of defining package settings) will be fixed when publishing to the marketplace becomes available.

Tip 2: If you store credentials inside your custom API App, please note that by default API Apps are published as public… so if this particular API App reads from your health IoT device, anyone in the world who knows or discovers the API address can call it and read your health data! So set the security to internal, not public!

Tip 3: The browser designer can sometimes be unstable and produce not exactly what you were hoping for; always check the code view as well!
Tip 4: Azure API Apps have application settings editable in the Azure portal, like normal Azure "Web" Apps, but they are hidden!
Look at this blog post that saved me!

That’s it!