Bot hand off to agent with Salesforce Live Chat Part 2

Hi, in our previous article we introduced the API calls used to send and receive messages with a live agent on Salesforce. Now it’s time to add the bot component and combine bot and live agent to implement the handoff.

For the bot I used one of the frameworks I know best, the Microsoft Bot Framework, but some of the concepts can also be applied to other bot solutions.

We start from the Bot Intermediator Sample provided here, which already has some functionality built in. In particular, it uses the bot message routing engine, which was built with the idea of routing conversations between user, bot and agent, creating when needed direct conversations between the user and the agent that are actually relayed by the bot through this engine.

Let’s see one way to combine this with the Salesforce Live Agent API. We will take some shortcuts, and this solution is not meant to be used in a production environment, but hopefully it can give you an idea of how to design a fully fledged solution.

  1. When the word “human” is mentioned in the conversation, the intermediator sample triggers a request for the intervention of an agent and parks it inside the routing engine’s database of pending requests. Our addition is an extra ConcurrentDictionary used as in-memory storage to hold the request and its conversation, and later other properties of interest to us (a minimal sketch of this mapping follows the list).
  2. Using the Quartz scheduling engine we monitor the routing engine’s pending requests with a recurring job and dequeue them, starting (again with Quartz) an on-demand job that opens a connection with Live Chat, waits for the agent to take the call and binds the sessionId and the other properties of the opened Live Chat session to the request. This thread can finish here, but before it does we start another on-demand thread that watches for any incoming message arriving from the Live Chat session for this request and routes it to the conversation opened at step 1.
  3. In the bot’s message controller, in addition to the default routing rules, we add another rule that checks whether the current conversation is “attached” to a live chat session and, if so, sends all the chat messages written by the user to the related live chat session.
  4. When the thread watching the live chat session stops receiving messages and times out, or receives a disconnect/end-chat event, it removes the conversation’s live chat session from the dictionary. From that moment, if the user writes again he will be talking to the bot, and if he wants to speak with an agent again he has to trigger the “human” keyword again.
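Here is a minimal sketch of the in-memory mapping used in steps 1, 3 and 4 (LiveChatSession and HandoffStore are hypothetical names introduced only for illustration; in a production system this state would live in a durable store, not in memory):

using System.Collections.Concurrent;

// Hypothetical container for the Live Chat session data we bind to a request.
public class LiveChatSession
{
    public string SessionId { get; set; }
    public string AffinityToken { get; set; }
    public string SessionKey { get; set; }
}

public static class HandoffStore
{
    // Key: bot conversation id; value: the Live Chat session attached to it.
    private static readonly ConcurrentDictionary<string, LiveChatSession> Sessions =
        new ConcurrentDictionary<string, LiveChatSession>();

    // Step 2: the on-demand job binds the accepted Live Chat session to the conversation.
    public static void Attach(string conversationId, LiveChatSession session) =>
        Sessions[conversationId] = session;

    // Step 3: the message controller checks whether the conversation is attached.
    public static bool TryGetSession(string conversationId, out LiveChatSession session) =>
        Sessions.TryGetValue(conversationId, out session);

    // Step 4: the watcher thread detaches the conversation on timeout or end-chat.
    public static void Detach(string conversationId) =>
        Sessions.TryRemove(conversationId, out _);
}

With this in place, the extra routing rule of step 3 reduces to a TryGetSession lookup on the incoming activity’s conversation id: if a session is attached, the user’s message is forwarded to Live Chat instead of the bot dialogs.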

Here are some screenshots:

The chat begins with a bot that simply repeats the sentences we write.


Live Agent is ready to handle new calls


 

Let’s ask for help


And here the request arrives in Live Chat.


Once accepted, we can start the handoff, opening a case in Salesforce.


And here we can check if we are talking to a human 🙂


In the third and final part we will look at some code snippets that showcase this functionality, and we will describe a good design for the solution if we want to industrialize it.

 


Bot hand off to agent with Salesforce Live Chat Part 1

Hi everyone, one of the most requested features in modern implementations is a smooth transition from the automated response system (our lovely bot) to a human.

In fact, our objective is usually the following:

  1. Handle the customer request, first qualifying it (collect data, ask for additional information)
  2. It can happen that the request can be handled with a simple and repetitive solution, and the bot should cover exactly this scenario
  3. It can also happen that the request is so complex that it can be handled only by a call center operator, but we will make good use of the operator’s time because he will be involved in an activity where he can bring distinctive value

One of the most used call center modules for human assistance on a case is Salesforce Live Chat, so it makes sense to understand how we can transition from any bot implementation to Live Chat without asking the customer to change UI, move to another web page or, more importantly, re-type all the information he wrote at the qualification stage. In other words, assuming that the triage has been done in the bot application, we want to bring the entire conversation state from the bot to the live agent’s attention.


Let’s start with the basics and see the “how to” from the beginning:

First you need a Salesforce developer sandbox for your testing; you can request one for free here.

Once you have your sandbox you have to enable the Live Agent functionality, following the steps described here. Please pay attention to each step; your last step should be this one.

You can check that everything works by creating a sample HTML page with the JavaScript generated by the buttons functionality and by the deployment one (remember to put the deployment JavaScript at the end of the page, just before the closing body tag!).

If you want an unofficial guide to help you further, check also this blog or this other blog.

At this point you should have your live chat working nicely, and we can now proceed to study the Salesforce Live Agent REST API that allows us to use the Live Chat functionality programmatically.

If you look a bit at how the API works, you will soon notice that it has been designed to be consumed mainly by final clients directly (web pages or mobile apps), while it lacks some server-to-server functionality like webhooks. In a nutshell, it is very helpful if you want to build a branded web page or an iOS/Android app for call center support, but a bit less helpful for transitioning a conversation from a server application (our bot).

In order to use the API we need some info: your Salesforce organization id, your Live Agent deploymentId, the Live Agent buttonId and finally the Live Agent API endpoint.

You can find this info here and in this guide.

OK, now we can finally start with some coding 🙂. I will use C# (running from a Mac), so I guess it can run on any platform.
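The snippets below reference a few configuration constants that are not shown in the listings. As a sketch, they could be defined like this (the relative paths follow the Live Agent REST API resources, while the endpoint, ids and API version are placeholders to replace with the values from your own org):

// Placeholder configuration: replace with the values collected from your org.
private const string liveAgentEndPoint = "https://YOUR_INSTANCE.salesforceliveagent.com";
private const string liveAgentApiVersion = "42";
private const string liveAgentOrgId = "00D...";        // Salesforce organization id
private const string liveAgentDeploymentId = "572..."; // Live Agent deployment id
private const string liveAgentButtonId = "573...";     // Live Agent chat button id
private const string liveAgentSessionRelativePath = "/chat/rest/System/SessionId";
private const string liveAgentChasitorRelativePath = "/chat/rest/Chasitor/ChasitorInit";
private const string liveAgentMessagesRelativePath = "/chat/rest/System/Messages";
private const string liveAgentChasitorChatRelativePath = "/chat/rest/Chasitor/ChatMessage";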

First we need to make our first REST call to retrieve: the session id for the new session, the session key, the affinity token for the session (passed in the header of all future requests) and finally the clientPollTimeout, which represents the number of seconds within which you must make a Messages request before your Messages long-polling loop times out and is terminated (we will understand this better later):

private static async Task<ChatBag> createSession()
{
    string sessionEndpoint = liveAgentEndPoint + liveAgentSessionRelativePath;
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    // No session exists yet, so the affinity token for this first call is "null".
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", "null");
    HttpResponseMessage response = await client.GetAsync(sessionEndpoint);
    JObject jObj = new JObject();
    if (response.IsSuccessStatusCode)
    {
        string resp = await response.Content.ReadAsStringAsync();
        jObj = JObject.Parse(resp);
    }
    response.Dispose();
    // Store the ids and tokens that every subsequent call will need.
    ChatBag chatObj = new ChatBag();
    chatObj.setSessionId((String)jObj.GetValue("id"));
    chatObj.setAffinityToken((String)jObj.GetValue("affinityToken"));
    chatObj.setSessionKey((String)jObj.GetValue("key"));
    chatObj.setButtonId(liveAgentButtonId);
    chatObj.setSequence(1);
    client.Dispose();
    return chatObj;
}

Now that we have this information we can actually tell the live agent that we would like to start a chat session with him (!). This requires another API call that requests a chat visitor session, and the session will actually be opened only when the live agent accepts the request in the Salesforce console.

So first we make the request:

private static async Task createChatRequest(ChatBag chatObj)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SEQUENCE", "1");
    // The body identifies our org, deployment and button, and describes the visitor.
    JObject body = new JObject();
    body.Add(new JProperty("organizationId", liveAgentOrgId));
    body.Add(new JProperty("deploymentId", liveAgentDeploymentId));
    body.Add(new JProperty("buttonId", liveAgentButtonId));
    body.Add(new JProperty("sessionId", chatObj.getSessionId()));
    body.Add(new JProperty("trackingId", ""));
    body.Add(new JProperty("userAgent", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"));
    body.Add(new JProperty("language", "en-US"));
    body.Add(new JProperty("screenResolution", "1440x900"));
    body.Add(new JProperty("visitorName", "ConsoleTest"));
    // prechatDetails/prechatEntities can carry the bot conversation context (see below).
    body.Add(new JProperty("prechatDetails", new List<String>()));
    body.Add(new JProperty("receiveQueueUpdates", true));
    body.Add(new JProperty("prechatEntities", new List<String>()));
    body.Add(new JProperty("isPost", true));
    StringContent cnt = new StringContent(body.ToString(), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(liveAgentEndPoint + liveAgentChasitorRelativePath, cnt);
    if (response.IsSuccessStatusCode)
    {
        string responseText = await response.Content.ReadAsStringAsync();
    }
    response.Dispose();
    client.Dispose();
}

If everything went right we should receive an “OK” response while we wait for the operator to actually accept the visitor session request.

An important thing to notice is that the API supports the prechatDetails and prechatEntities objects, which we can use to bring along the conversation data that the customer exchanged with the bot, so the live agent can look at this info and immediately help the customer with the right context, without re-asking the same questions.
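For example, assuming the bot triage collected a short transcript, the prechatDetails property could be populated as in this minimal sketch (the label/value/transcriptFields/displayToAgent shape follows the custom details described in the Live Agent REST API docs; check the exact field names against your API version). It would replace the empty prechatDetails line in createChatRequest above:

// Hypothetical example: pass the bot transcript as a custom detail the agent can read.
JArray prechatDetails = new JArray
{
    new JObject(
        new JProperty("label", "Bot transcript"),
        new JProperty("value", "User asked about order #1234, then typed 'human'"),
        new JProperty("transcriptFields", new JArray()),
        new JProperty("displayToAgent", true))
};
body.Add(new JProperty("prechatDetails", prechatDetails));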

Since starting the chat is not automatic, because we have to wait for the live agent to accept it, at this stage we just have to poll the Messages API and wait for the confirmation, using a thread that calls the API this way:

private static async Task<ChatMessageResponse> receiveMessages(ChatBag chatObj)
{
    ChatMessageResponse jObj = new ChatMessageResponse();
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());
    // Long poll: this GET returns as soon as messages are available, or times out.
    HttpResponseMessage response = await client.GetAsync(liveAgentEndPoint + liveAgentMessagesRelativePath);
    if (response.IsSuccessStatusCode)
    {
        string respText = await response.Content.ReadAsStringAsync();
        jObj = JsonConvert.DeserializeObject<ChatMessageResponse>(respText);
        if (jObj != null)
        {
            // Log the ChatRequestSuccess confirmation when it arrives.
            var msgs = from x in jObj.messages
                       where x.type == "ChatRequestSuccess"
                       select x;
            foreach (Messages activity in msgs)
            {
                Console.WriteLine("VisitorId: " + activity.message.visitorId);
            }
        }
    }
    response.Dispose();
    client.Dispose();
    return jObj;
}

OK, so when we receive a message of type ChatRequestSuccess, it means that the chat request was successful and has been routed to the available agents.

To be completely sure that an agent really accepted our conversation we have to wait for a message of type ChatEstablished, from which we can also read the name and the id of the agent answering us.
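Putting the pieces together, a simple blocking wait loop could look like this minimal sketch (waitForAgent is a name introduced here; it reuses the receiveMessages call above and assumes ChatMessageResponse exposes the deserialized messages list, plus a using System.Linq; directive):

private static async Task waitForAgent(ChatBag chatObj)
{
    // Keep long polling the Messages resource until the agent picks up the chat.
    while (true)
    {
        ChatMessageResponse resp = await receiveMessages(chatObj);
        if (resp?.messages != null && resp.messages.Any(m => m.type == "ChatEstablished"))
        {
            Console.WriteLine("Agent accepted the chat.");
            return;
        }
        // Each Messages call is itself a long poll, so this short pause is only
        // there to keep the loop polite when calls fail fast.
        await Task.Delay(TimeSpan.FromSeconds(1));
    }
}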

OK, now we can finally send a “Hello Mr Agent!” text to our live agent with this API:

private static async Task sendTxtMessage(ChatBag chatObj, string textToSend)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-API-VERSION", liveAgentApiVersion);
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-AFFINITY", chatObj.getAffinityToken());
    client.DefaultRequestHeaders.Add("X-LIVEAGENT-SESSION-KEY", chatObj.getSessionKey());
    // The body is just the text to deliver to the agent.
    JObject bodyT = new JObject();
    bodyT.Add(new JProperty("text", textToSend));
    StringContent cnt = new StringContent(bodyT.ToString(), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(liveAgentEndPoint + liveAgentChasitorChatRelativePath, cnt);
    if (response.IsSuccessStatusCode)
    {
        string respText = await response.Content.ReadAsStringAsync();
    }
    response.Dispose();
    client.Dispose();
}

And we can receive the agent’s replies using the same receive-message polling technique, this time looking for messages of type ChatMessage.
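For instance, reusing receiveMessages from above, the agent’s text can be extracted with the same LINQ filter pointed at the ChatMessage type (a sketch, assuming the deserialized payload exposes a text property as in the API’s ChatMessage event):

ChatMessageResponse resp = await receiveMessages(chatObj);
// Pull out only the agent's chat lines from the polled batch.
var agentReplies = from m in resp.messages
                   where m.type == "ChatMessage"
                   select m.message.text;
foreach (string reply in agentReplies)
{
    Console.WriteLine("Agent: " + reply);
}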

In the next part of the article I will go through the integration with a bot and see how we can implement the handoff!

How to create the perfect Matchmaker with Bot Framework and Cognitive Services

Hi everyone, this time I wanted to showcase some of the many capabilities of Microsoft Cognitive Services using a “cupido” bot built with the Microsoft Bot Framework.

So what is the plan? Here are some ideas:

  • Leverage only Facebook as the channel! Why? Because with Facebook you have people already “logged in”, and you can leverage the Messenger profile API to automatically retrieve the user’s details and, more importantly, his Facebook photo!
  • Since the Facebook photo is usually an image with a face, we can use it with the Vision and Face APIs to understand gender, age and a bunch of other interesting info without any user interaction (see the sketch after this list)!
  • We can score, with a custom vision model trained on some publicly available images, whether a person looks like a super model or not 😉
  • Using all this info (age, gender, makeup, sunglasses, super model or not, hair color, etc.) collected with all those calls, we can decide which candidates inside our database are the right ones for our user and display the ones that fit according to our demo rules.
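As an illustration, here is a hedged sketch (not the demo’s actual code) of calling the Face API REST endpoint with the user’s Facebook photo URL to read age, gender, hair, glasses and makeup; the region in the endpoint and the subscription key are placeholders:

private static async Task<string> detectFaceAttributes(string photoUrl)
{
    HttpClient client = new HttpClient();
    client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "YOUR_FACE_API_KEY");
    string endpoint = "https://westeurope.api.cognitive.microsoft.com/face/v1.0/detect"
                    + "?returnFaceAttributes=age,gender,glasses,hair,makeup";
    // The Face API accepts a JSON body with the public URL of the image.
    JObject body = new JObject(new JProperty("url", photoUrl));
    StringContent cnt = new StringContent(body.ToString(), Encoding.UTF8, "application/json");
    HttpResponseMessage response = await client.PostAsync(endpoint, cnt);
    string respText = await response.Content.ReadAsStringAsync();
    response.Dispose();
    client.Dispose();
    return respText; // JSON array with one entry per detected face
}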

Of course at the beginning our database of profiles will be empty, but with the help of friends and colleagues we can quickly fill it and have fun during the demo.

So in practice, how does it look?

Here is the first interaction: after saying hello, the bot immediately personalizes the experience with our Facebook data (photo, name, last name, etc.) and asks if we want to participate in the experiment.

After accepting, it uses the described APIs to understand the image and calculate age, hair, super model score, etc.

Yeah, I know my super model score is not really good, but let’s see if there are any matches for me anyway…

Of course the bot is smart enough to display the profile of my wife, otherwise I would have been in really big trouble :-).

Now I guess many of you have this question: how is the super model score calculated?

Well, I trained Microsoft’s Custom Vision service with 30+ photos of real models and 30+ photos of “normal people”, and after 4 iterations I already had 90% accuracy in detecting super models in photos 😉

Of course there are several things to consider here:

  1. Images should be the focus of the picture
  2. Have sufficiently diverse images, angles, lighting and backgrounds
  3. Train with images that are similar (in quality) to the images that will be used for scoring

And we certainly have super model pics with higher resolution, better lighting and better exposure than the photos of “normal” people like you and me, but for the purposes of this demo the results were very good.

Another consideration is that you don’t always have to use natural language processing in bots (in our case, in fact, we skipped LUIS) because, especially if we are not developing a Q&A/support bot, users prefer buttons and a minimal amount of info to provide.

Imagine a bot that handles your Netflix subscription: you just want buttons like activate/deactivate membership (if you go on vacation) and “recommendations for tonight”, as in the sketch below.
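To make the idea concrete, here is a minimal sketch in Bot Framework v3 style of a buttons-first reply (the card titles and values are made up for the example, and connector is the usual ConnectorClient for the incoming activity):

// Reply with a HeroCard whose buttons cover the few actions users really need.
Activity reply = activity.CreateReply("What would you like to do?");
HeroCard card = new HeroCard
{
    Title = "Your subscription",
    Buttons = new List<CardAction>
    {
        new CardAction(ActionTypes.ImBack, "Activate membership", value: "activate"),
        new CardAction(ActionTypes.ImBack, "Deactivate membership", value: "deactivate"),
        new CardAction(ActionTypes.ImBack, "Recommendations for tonight", value: "recommend")
    }
};
reply.Attachments = new List<Attachment> { card.ToAttachment() };
await connector.Conversations.ReplyToActivityAsync(reply);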

Another important thing to consider is bot analytics, to understand how your bot is performing. I leverage this great tool that under the covers uses Azure Application Insights.

If instead you are in love with statistics, you can try this Jupyter notebook with the following template to analyze the Azure Application Insights metrics and events with your custom code.

If you want to try the bot with all the telemetry setup already done, you can grab, compile and try the demo code (do not use this code in any production environment) that is available here, and if this is your first bot, start from this tutorial to understand a bit the various pieces needed.

My Top 2 Microsoft Build 2017 Sessions

Let’s start with Number 1: this is the visionary cloud that is arriving, compute nodes combined with FPGA neurons that act as hardware microservices, communicating and changing their internal code, directly attached to the Azure network, like a global neural network. Do you want to know more? Click here and check the content of this presentation directly with the new video indexer AI, jumping straight to the portions of the video that you like and searching for words, concepts and people appearing in the video.

We can then look here at Number 2 (go to 1:01:54):

Matt Velloso teaching a robot (Zenbo) how to recognize the images the robot sees, using the Microsoft Bot Framework and the new Custom Image Recognition Service.

Do you want to explore more?

Go here to Channel 9 and have fun exploring all the massive updates that have been released!