UniFi – Install a UniFi Cloud Controller on Google Cloud Platform Compute Engine

This time, let's see how to set up the UniFi Controller software on Google Cloud Platform (GCP) in a few simple steps.

Step 1: Go to https://console.cloud.google.com/ and register. You will receive $300 of Google credits to use in the first 12 months, but more importantly you get access to the free tier!

Step 2: Once your account is up and running, you can provision a Linux instance by clicking on the big "Compute Engine" button and then "VM instances".

Step 3: Give the virtual machine a name, choose a data center near where you live, pick the micro machine type (free!), and select Ubuntu 16.04 as the OS.

Step 4: Set a network tag for this instance (it will be used later) and add SSH keys if needed (you can do everything with the web SSH console without specifying any).

Step 5: Leave all the additional settings at their default values and finally hit the Create button!

Step 6: After a few seconds your instance is ready and you should see it running. Write down the public IP address, because you will need it shortly.

Step 7: Now we have to open the required ports for the controller to work correctly.

First, go to the VPC network tab of your account and select Firewall rules:

Here, add a firewall rule specific to your controller instance using the target tag we defined earlier, set 0.0.0.0/0 as the source IP range to allow connections from any IP, and open these ports:

tcp:8443; tcp:8080; tcp:8843; tcp:8880; tcp:6789; udp:3478

Here is a screenshot of those settings:
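
If you prefer the command line, an equivalent rule can be created with the gcloud CLI; a minimal sketch, assuming you tagged the instance with the network tag unifi:

gcloud compute firewall-rules create unifi-controller \
    --allow tcp:8443,tcp:8080,tcp:8843,tcp:8880,tcp:6789,udp:3478 \
    --source-ranges 0.0.0.0/0 \
    --target-tags unifi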

Step 8: Connect with the web SSH console and install the UniFi Controller software with these commands:

echo "deb http://www.ubnt.com/downloads/unifi/debian unifi5 ubiquiti" | sudo tee -a /etc/apt/sources.list

sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 06E85760C0A52C50

sudo apt-get update

sudo apt-get install unifi
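
If the installation went well, the controller service should come up after a minute or so; on Ubuntu 16.04 you can check it with:

sudo systemctl status unifi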

Step 9: Connect to the controller web interface at https://IP_Address:8443/ (using the public IP you wrote down earlier) and complete the UniFi setup wizard:

Finally, you can proceed to adopt your UniFi devices using Layer 3 adoption!

Automated Machine Learning with H2O Driverless AI

Hi everyone, this time I want to evaluate another automated machine learning tool, H2O Driverless AI, and also compare it with DataRobot (of course, only a very lightweight comparison has been done).

The first great feature of H2O Driverless AI is that you can have it (almost) instantly: as long as you have an Amazon, Google or Azure account, you can spin up an H2O Driverless AI instance quite easily, as described here:

You can choose to bring your own license (you can request a 21-day evaluation, as I did) or pay the license cost as part of the hourly cost of the VM at your cloud provider.

Once your VM is up and running, my suggestion is to update it to the latest Docker image of H2O Driverless AI, as described in the how-to:

sudo h2oai update

and then connect to the UI.

Once connected, you can upload datasets directly from the UI and run ML experiments on them, choosing which column you want to predict/analyze and which metric to measure your model with (AUC, etc.).

Here is one running on 4 GPUs:

Once the experiment is finished, the model interpretation page lets you understand the key influencers in your dataset for the target column you wanted to analyze/predict.

Since I had run some tests with DataRobot on Kaggle competitions a few days earlier, I tried to do the same with H2O, and the results are…

Titanic competition (metric: accuracy, higher is better):

DataRobot 0.79904 (best)

H2O 0.78947

House Prices regression (metric: RMSE, lower is better):

DataRobot 0.12566 (best)

H2O 0.13378

As you can see, DataRobot leads on both, but the H2O results are not far behind!

Regarding model understanding and explainability of the results in "human" terms, I find DataRobot offers more varied and meaningful visualizations than H2O. Additionally, with DataRobot you can decide for yourself which of the many models to use, not only the winning one (there are cases where you want a less accurate model with a higher evaluation/inference speed), while with H2O you have no choice but to use the single model that survives the automatic evaluation process.

H2O, however, is more accessible for testing and trying out, and it offers GPU acceleration, a very nice bonus, especially on large datasets.

Happy Auto ML!

Extending UniFi Data Analysis & Reports

Hi everyone, this time I want to share one of my favorite side activities: playing with my Ubiquiti home setup!

As you already know, I have my controller running on Azure, but I wanted to understand more about what kind of data is stored inside the controller; in other words, where the data we see in the controller dashboard lives.

Inspecting the binaries and reading a few forum posts, I figured out that this data sits in a MongoDB database, and of course I wanted to look a bit inside it.

Here is what I did: I made a backup of the controller data using the controller's web interface and downloaded it locally:

At this stage I downloaded the controller software, installed it on my laptop (a MacBook) and, at controller startup, requested a restore of the backup I had just downloaded from the cloud controller.

Once the restore is done, keep the controller running and use a MongoDB client like Robo 3T to connect to localhost on port 27117 (we connect to the mongod process started locally by the controller).
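
If you prefer the command line to a GUI client, the mongo shell works too; a quick sketch (ace_stat is the statistics database mentioned below):

mongo --port 27117
> show dbs
> use ace_stat
> db.stat_daily.findOne()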

This is great! But I would like to produce some nice dashboards with a visualization tool like Tableau, Power BI or simply Excel, and the data is in a "document" format while I need it in a tables/records format.

The solution is the MongoDB BI Connector, which is a kind of "wrapper" or "translator" between the document world and the tables/records world.

But things are never simple ;-). This connector only works with MongoDB 3.0 or higher, while the one inside the controller software is 2.6. So first we have to download a separate MongoDB server that works with the connector and, more importantly, upgrade the database itself to the 3.x format.

First, let's copy the database from the controller folder (look for a folder called db) to another location, and write down that location.

I tried and failed several times before understanding how to do it, but this is the sequence (using brew to install MongoDB on my Mac; see the sketch after this list):

install MongoDB 3.0 -> open the controller database in the location we copied.

uninstall 3.0 / install 3.2 -> open the controller database in the location we copied.

uninstall 3.2 / install 3.4 -> open the controller database in the location we copied.

This will bring the database to a format that works with the BI Connector.
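
Each "open" step boils down to starting the mongod of that version against the copied files, letting it migrate them, and shutting it down cleanly before moving to the next version; a sketch (the path is wherever you copied the db folder):

mongod --dbpath /path/to/db-copy
# wait for the startup/migration to complete, then stop mongod (Ctrl-C)
# and repeat with the next MongoDB version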

Now, with the BI Connector, you can extract the schema of any document collection you like (for example the stat_daily collection of the ace_stat database) and then spawn the wrapper process that a visualization tool can connect to:
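
In practice this is two commands; a sketch, assuming the upgraded database is served by a local mongod on the default port 27017:

mongodrdl --db ace_stat --collection stat_daily --out ace_stat.drdl
mongosqld --schema ace_stat.drdl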

In my case I used Tableau to create some test dashboards:

Here I see that the CPU of my gateway was a bit high during the first part of the month and then decreased significantly.

I can add other metrics, like downloaded data, to understand this better:

Admittedly, in this specific case the controller dashboards already offer a super nice visualization:

So the really interesting thing here is that you can create your own reports and discover new insights by looking at your own network data.

So what are you waiting for? Happy custom reporting on your UniFi network and device data!

Automatic Machine Learning with DataRobot

Hi, this time I want to share with you my experiments with DataRobot, an automated machine learning product that promises to help you leverage machine learning techniques with a few mouse clicks.

Let's see it in action on a very simple dataset, the so-called "Titanic: Machine Learning from Disaster" competition on Kaggle (extract):

“The sinking of the RMS Titanic is one of the most infamous shipwrecks in history.  On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 out of 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.

In this challenge, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.”

The data looks like this:

We can see that for each passenger we have some data (name, sex, age, port of embarkation, ticket class) and, in the Survived column, an indication of whether the passenger survived the sinking. Our objective is to discover, from all the information we have about the passengers, whether there are key influencers of survival and what they are.

The DataRobot interface is very simple at the start, asking us to upload the data we have:

Once the upload is done, DataRobot asks which column we want to "predict":

and once we select our target column, we can press the big "Start" button.

Here DataRobot analyzes our file, determines how many models are suitable for this data and, once done, automatically starts "training" all those models in parallel according to the processing power we select (the "workers" setting).

Once all the models are trained, the leaderboard shows in first place the best model found, according to the metric DataRobot picks for the problem we are trying to solve.

Once we have selected the "best model", we can look at the key findings, like these:

In other words, women in first class had a high chance of survival, while men in third class were really at risk of not surviving.

Using the predict feature of DataRobot, we tested the accuracy of this model on the external Kaggle "test" file, uploading the predictions obtained from DataRobot to Kaggle, and here is the result:

which is absolutely not bad: given that 9408 data scientists participated in this competition, it means I am in the top 18% globally!

The pros: I did not touch the data at all (no added features, no normalization, no removal of columns like IDs, etc.); DataRobot analyzed the data as is. I used all the default settings without touching any "advanced option".

The cons: DataRobot lacks data preparation functionality (you can try products like Alteryx or Trifacta in combination with DataRobot), which means we need at least two products to manage a data science experiment end to end, since one typically involves operations like joins across multiple tables and files, complex aggregations, subqueries, etc.

Finally, while we absolutely have to admit that 80% of the data science experimentation job is data collection, source access, data preparation, cleaning, etc., DataRobot can still unlock several quick wins and opportunities in all the IT departments where analysts/developers are real experts in those activities and have business knowledge of the data, but lack data science skills.

Bot hand off to agent with Salesforce Live Chat Part 2

Hi, in our previous article we introduced the API calls for sending and receiving messages with a live agent on Salesforce. Now it's time to add the bot component and combine bot and live agent to implement the hand off.

For the bot I used one of the frameworks I know best, the Microsoft Bot Framework, but some of the concepts can also be applied to other bot solutions.

We start from the Bot Intermediator Sample provided here, which already has some functionality built in. In particular, it uses a bot routing engine built with the idea of routing conversations between user, bot and agent, creating when needed direct conversations between user and agent that are actually routed by the bot through this engine.

Let's see one way to combine this with the Salesforce Live Agent API. We will take some shortcuts, and this solution is not meant for a production environment, but hopefully it gives you an idea of how to design a fully fledged solution.

  1. When the word "human" is mentioned in the conversation, the intermediator sample triggers a request for agent intervention and parks it in the routing engine's database of pending requests. Our addition was to define an extra ConcurrentDictionary as in-memory storage for the request and its conversation, to which we later add other properties of interest to us.
  2. Using the Quartz scheduling engine, a recurring job monitors the routing engine's pending requests and dequeues them, starting (again with Quartz) an on-demand job that opens a connection with Live Chat, waits for the agent to take the call, and binds the sessionId and the other properties of the opened Live Chat session to the request. This thread can finish here, but before it does, we start another on-demand thread that watches for any incoming message for this request from the Live Chat session and routes it to the conversation opened at step 1.
  3. In the message controller of the bot, in addition to the default routing rules, we add another rule that checks whether the current conversation is "attached" to a Live Chat session and, if so, sends all the chat messages written by the user to that session (see the sketch after this list).
  4. When the watcher thread stops receiving messages and times out, or receives a disconnect/end-chat event, it removes the conversation-to-session mapping from the dictionary. From that moment on, if the user writes again he will be talking to the bot, and if he wants to speak with an agent again he has to trigger the "human" keyword once more.
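
Here is a minimal sketch of the mapping and of the extra routing rule of step 3 (names like HandoffState are illustrative, and ChatBag is the session holder used in the Part 1 code):

using System.Collections.Concurrent;

public static class HandoffState
{
    // bot conversation id -> the Live Chat session bound to it
    public static readonly ConcurrentDictionary<string, ChatBag> Sessions =
        new ConcurrentDictionary<string, ChatBag>();
}

// In the bot message controller, before the default routing rules:
if (HandoffState.Sessions.TryGetValue(activity.Conversation.Id, out ChatBag chat))
{
    // the conversation is attached to a live chat session: forward the user text
    await sendTxtMessage(chat, activity.Text);   // the API call shown in Part 1
    return;                                      // and skip the normal bot dialog
}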

Here are some screenshots:

The chat begins with a bot that simply echoes the sentences we write:

The live agent is ready to handle new calls:

Let's ask for help:

And here the request arrives in Live Chat:

Once accepted, we can start the hand off by opening a case in Salesforce:

And here we can check that we are talking to a human 🙂

In the third and final part we will look at some code snippets that showcase this functionality, and we will describe what a good design of the solution could look like if we want to industrialize it.

Bot hand off to agent with Salesforce Live Chat Part 1

Hi everyone, one of the most requested features in modern implementations is a smooth transition from the automated response system (our lovely bot) to a human.

In fact, our objective is usually the following:

  1. Handle the customer request, first qualifying it (collect data, ask for additional information).
  2. The request may be handled with a simple and repetitive solution; the bot should cover exactly this scenario.
  3. The request may also be so complex that only a call center operator can handle it; even so, we make good use of the operator's time, because he will be involved in an activity where he can bring distinctive value.

One of the most used call center modules for human assistance on a case is Salesforce Live Chat, so it makes sense to understand how we can make the transition from any bot implementation to Live Chat without asking the customer to change UI, move to another web page or, more importantly, re-type all the information he entered at the qualification stage (assuming the triage has been done in the bot application, we want to bring the entire conversation state from the bot to the live agent's attention).

Let’s start with the basics and see the “how to” from the beginning:

First, you need a Salesforce developer sandbox for your testing; you can request one for free here.

Once you have your sandbox, you have to enable the Live Agent functionality following the steps described here; please pay attention to each step, and your last step should be this one.

You can check that everything works by creating a sample HTML page with the JavaScript generated by the buttons functionality and the deployment one (remember to put the deployment JavaScript at the end of the page, just before the closing body tag!).

If you want an unofficial guide to help you further, also check this blog or this other blog.

At this point you should have your live chat working nicely, and we can now proceed to study the Salesforce Live Agent REST API, which allows us to use the live chat functionality programmatically.

If you look a bit at how the API works, you will soon notice that it has been designed to be consumed mainly by end clients directly (web pages or mobile apps), while it lacks some server-to-server functionality like webhooks. In a nutshell, it is very helpful if you want to build a branded web page or iOS/Android app for call center support, but a bit less helpful for transitioning a conversation from a server application (our bot).

In order to use the API we need some info: your Salesforce organization Id, your Live Agent deploymentId, the Live Agent buttonId and finally the Live Agent API endpoint.

You can find this info here and in this guide.

OK, now we can finally start with some coding 🙂. I will use C# (running from a Mac), so I guess it can run on any platform.

First, we need to make a REST call to retrieve the session id for the new session, the session key, the affinity token (passed in the header of all future requests) and finally the clientPollTimeout, which represents the number of seconds within which you must make a Messages request before your Messages long-polling loop times out and is terminated (we will understand this better later):

private static async Task<ChatBag> createSession()
{
    string sessionEndpoint = liveAgentEndPoint + liveAgentSessionRelativePath;
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    // no session exists yet, so the affinity header must be the literal string "null"
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", "null");
    HttpResponseMessage response = await client.GetAsync(sessionEndpoint);
    JObject jObj = new JObject();
    if (response.IsSuccessStatusCode)
    {
        string resp = await response.Content.ReadAsStringAsync();
        jObj = JObject.Parse(resp);
    }
    response.Dispose();
    // ChatBag holds the session data needed by all the following calls
    ChatBag chatObj = new ChatBag();
    chatObj.setSessionId((String)jObj.GetValue("id"));
    chatObj.setAffinityToken((String)jObj.GetValue("affinityToken"));
    chatObj.setSessionKey((String)jObj.GetValue("key"));
    chatObj.setButtonId(liveAgentButtonId);
    chatObj.setSequence(1);
    client.Dispose();
    return chatObj;
}

Now that we have this information, we can actually tell the live agent that we would like to start a chat session with him (!). This requires another API call to request a chat visitor session, and the session will actually be opened only when the live agent accepts the request in the Salesforce console.

So first we do the request:

private static async Task createChatRequest(ChatBag chatObj)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SEQUENCE", "1");
    JObject body = new JObject();
    body.Add(new JProperty("organizationId", liveAgentOrgId));
    body.Add(new JProperty("deploymentId", liveAgentDeploymentId));
    body.Add(new JProperty("buttonId", liveAgentButtonId));
    body.Add(new JProperty("sessionId", chatObj.getSessionId()));
    body.Add(new JProperty("trackingId", ""));
    body.Add(new JProperty("userAgent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"));
    body.Add(new JProperty("language", "en-US"));
    body.Add(new JProperty("screenResolution", "1440x900"));
    body.Add(new JProperty("visitorName", "ConsoleTest"));
    body.Add(new JProperty("prechatDetails", new List<String>()));
    body.Add(new JProperty("receiveQueueUpdates", true));
    body.Add(new JProperty("prechatEntities", new List<String>()));
    body.Add(new JProperty("isPost", true));
    StringContent cnt = new StringContent(body.ToString(), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(liveAgentEndPoint + liveAgentChasitorRelativePath, cnt);
    if (response.IsSuccessStatusCode)
    {
        string responseText = await response.Content.ReadAsStringAsync();
    }
    response.Dispose();
    client.Dispose();
}

If everything went right, we should receive an "OK" response while we wait for the operator to actually accept the visitor session request.

An important thing to notice is that the API supports prechatDetails and prechatEntities objects, which we can use to carry over the conversation data the customer exchanged with the bot, so the live agent can look at this info and immediately help the customer with the right context, without asking the same questions again.
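
For example, a custom detail carrying a bot transcript could be attached to the request body in place of the empty list above. This is only a sketch based on my reading of the documentation (verify the field names against your API version; conversationSummary is a hypothetical string built by the bot):

JArray prechatDetails = new JArray();
prechatDetails.Add(new JObject(
    new JProperty("label", "Bot Transcript"),
    new JProperty("value", conversationSummary),      // hypothetical: text collected by the bot
    new JProperty("entityMaps", new JArray()),
    new JProperty("transcriptFields", new JArray()),
    new JProperty("displayToAgent", true)));
body.Add(new JProperty("prechatDetails", prechatDetails));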

Since the approval to start the chat is not automatic and we have to wait for the live agent to accept, at this stage we just have to poll the Messages API and wait for the confirmation, using a thread that calls the API like this:

private static async Task<ChatMessageResponse> receiveMessages(ChatBag chatObj)
{
    ChatMessageResponse jObj = new ChatMessageResponse();
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());

    HttpResponseMessage response = await client.GetAsync(liveAgentEndPoint + liveAgentMessagesRelativePath);
    if (response.IsSuccessStatusCode)
    {
        string respText = await response.Content.ReadAsStringAsync();
        jObj = JsonConvert.DeserializeObject<ChatMessageResponse>(respText);

        if (jObj != null)
        {
            // look for the confirmation that the request has been routed to the agents
            var msgs = from x in jObj.messages
                       where x.type == "ChatRequestSuccess"
                       select x;
            foreach (Messages activity in msgs)
            {
                Console.WriteLine("VisitorId: " + activity.message.visitorId);
            }
        }
    }
    response.Dispose();
    client.Dispose();
    return jObj;
}

OK, so when we receive the ChatRequestSuccess message type, it means the chat request was successful and has been routed to the available agents.

To be completely sure that an agent really accepted our conversation, we have to wait for the ChatEstablished message type, from which we can also read the name and id of the agent answering us.
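
Putting it together, the waiting thread can be a simple loop over the previous call; a sketch (a real implementation should also honor clientPollTimeout and handle session-end messages):

// keep polling until the agent picks up the chat
bool established = false;
while (!established)
{
    ChatMessageResponse resp = await receiveMessages(chatObj);
    established = resp?.messages != null
        && resp.messages.Any(m => m.type == "ChatEstablished");
}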

OK, now we can finally send a "Hello Mr Agent!" text to our live agent with this API:

private static async Task sendTxtMessage(ChatBag chatObj, string textToSend)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());
    JObject bodyT = new JObject();
    bodyT.Add(new JProperty("text", textToSend));
    StringContent cnt = new StringContent(bodyT.ToString(), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(liveAgentEndPoint + liveAgentChasitorChatRelativePath, cnt);
    if (response.IsSuccessStatusCode)
    {
        string respText = await response.Content.ReadAsStringAsync();
    }
    response.Dispose();
    client.Dispose();
}

And we can receive the agent's replies using the same receive-message polling technique, this time searching for messages of type ChatMessage.

In the next part of the article I will go through the integration with a bot and see how we can implement the hand off!

Text Analytics with Facebook FastText

I recently worked on a very interesting challenge: imagine a very large collection of phrases/statements from which you want to derive the key ask/term of each one.

An example could be a large collection of emails: scan the subjects and understand the key topic of each email. For an email with the subject "Dear dad the printer is no more working…", we can say that the key is "printer".

If you try to apply generic key-phrase algorithms, they can work pretty well in a very general context, but if your context is specific they will miss several key terms that belong to the dictionary of your domain.

I successfully used Facebook fastText for this supervised classification task, and here is what you need to make it work:

  1. An Ubuntu Linux 16.04 virtual machine (free on a MacBook with Parallels Lite)
  2. fastText downloaded and compiled as described here
  3. A training set with statements and corresponding labels
  4. A validation set with statements and corresponding labels

So yes, you need to manually label a good amount of statements to make fastText "understand" your dataset well.

Of course, you can speed up the process by transforming each statement into an array of words and massively assigning labels where it makes sense (Python pandas or plain old SQL can help you here); a sketch follows the format description below.

Let’s see how to build the training set:

Create a file called train.csv and insert each statement on its own line, in the following format: __label__labelname followed by your statement.

Let’s make an example with the email subject used before:

__label__printer Dear dad the printer is no more working

You can also have multiple labels for the same statement; for example:

__label__printer __label__wifi Dear dad the printer and the wifi are dead

The validation set can be another file, called valid.csv, filled in exactly the same way; of course, you should follow the usual good practices to split your labeled dataset correctly into training and validation sets.
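
Here is the bootstrap sketch mentioned earlier: a quick (and rough) way to pre-label statements with keyword matching before a manual review. The keyword list and the column name short_statement are illustrative:

import pandas as pd

# illustrative keyword -> label mapping; build yours from your domain dictionary
keywords = {"printer": "printer", "wifi": "wifi"}

df = pd.read_csv("data/statements.csv")

def auto_label(text):
    # collect every label whose keyword appears in the statement
    found = [f"__label__{label}" for term, label in keywords.items()
             if term in str(text).lower()]
    return " ".join(found)

df["labels"] = df["short_statement"].map(auto_label)

# write only the statements that got at least one label, in fastText format
with open("data/train.csv", "w") as f:
    for _, row in df[df["labels"] != ""].iterrows():
        f.write(row["labels"] + " " + str(row["short_statement"]) + "\n")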

In order to start the training with fasttext you have to type the following command:

./fasttext supervised -input data/train.csv -output result/model_1 -lr 1.0 -epoch 25 -wordNgrams 2

This assumes your terminal is inside the fastText folder and the training file is in a subfolder called data, while the resulting model will be saved in the result folder.

I also added some optional arguments to improve the precision in my specific case; you can look at those options here.

Once the training is done (you will understand here why it is called fastText!), you can check the precision of the model like this:

./fasttext test result/model_1.bin data/valid.csv

In my case I obtained a good 83% 🙂

If you want to test your model manually (typing sentences and getting the corresponding labels), you can try the following command:

./fasttext predict result/model_1.bin -

fastText also has Python wrappers, like the one I used, and you can leverage such a wrapper to perform massive scoring as I did here:


from fastText import load_model
import pandas as pd

fastm = load_model('/result/model_1.bin')
k = 1  # number of labels to predict per statement
df = pd.read_csv("data/datatoscore.csv")
df.insert(2, 'label', '')
for index, row in df.iterrows():
    labels, probabilities = fastm.predict(str(row["short_statement"]), k)
    # write back with df.at: assigning to `row` would not update the DataFrame
    df.at[index, 'label'] = labels[0]
df.to_csv("data/finalresult.csv")

You can improve the entire process in many different ways. For example, you can use unsupervised training to obtain word vectors for your dataset, use this "dictionary" as the base for your list of labels, and use nearest neighbors to find similar words that can be grouped into a single label when doing the supervised training.
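
These are the two fastText commands involved in that idea; a sketch, assuming your raw statements are in data/corpus.txt:

./fasttext skipgram -input data/corpus.txt -output result/vectors
./fasttext nn result/vectors.bin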
