Bot hand off to agent with Salesforce Live Chat Part 2

Hi! In our previous article we introduced the API calls used to send and receive messages to a live agent on Salesforce. Now it's time to add the bot component and combine bot and live agent to implement the hand-off.

For the bot I used one of the frameworks I know best, the Microsoft Bot Framework, but most of the concepts also apply to other bot solutions.

We start from the Bot Intermediator Sample provided here, which already has some functionality built in. In particular, it uses a bot routing engine that was built with the idea of routing conversations between user, bot and agent, creating when needed direct user-agent conversations that are actually relayed by the bot through this engine.

Let's see one way to combine this with the Salesforce Live Agent API. We will take some shortcuts, and this solution is not meant for a production environment, but it should give you an idea of how to design a fully fledged solution.

  1. When the word "human" is mentioned in the conversation, the intermediator sample triggers a request for agent intervention and parks it in the routing engine's database of pending requests. Our addition is an extra ConcurrentDictionary used as in-memory storage to hold the request together with its conversation, plus other properties we will need later (see the sketch after this list).
  2. Using the Quartz scheduling engine, a recurring job monitors the routing engine's pending requests and dequeues them, starting (again with Quartz) an on-demand job that opens a connection with Live Chat, waits for an agent to take the call, and binds the sessionId and the other properties of the opened Live Chat session to the request. This thread can finish here, but before it does we start another on-demand thread that watches for any incoming message arriving from the Live Chat session for this request and routes it to the conversation opened at step 1.
  3. In the message controller of the bot, in addition to the default routing rules, we add another rule that checks whether the current conversation is "attached" to a live chat session and, if so, forwards all the chat messages written by the user to the related Live Chat session.
  4. When the watcher thread stops receiving messages and times out, or receives a disconnect/end-chat event, it removes the conversation-to-Live-Chat-session mapping from the dictionary. From this moment, if the user writes again, he will be talking to the bot, and if he wants to speak with an agent again he has to trigger the "human" keyword once more.
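Here is a minimal C# sketch of the in-memory mapping described in step 1 and of the checks described in steps 3 and 4; all type and member names (LiveChatBinding, LiveChatSessions, SendToLiveChatAsync) are hypothetical and only illustrate the idea:

using System.Collections.Concurrent;

// Hypothetical container for the Live Chat properties bound to a bot conversation (step 2).
public class LiveChatBinding
{
    public string ConversationId { get; set; }  // Bot Framework conversation id
    public string SessionId { get; set; }       // Live Agent session id
    public string SessionKey { get; set; }      // Live Agent session key
    public string AffinityToken { get; set; }   // Live Agent affinity token
}

public static class LiveChatSessions
{
    // Conversation id -> Live Chat session currently attached to it.
    public static readonly ConcurrentDictionary<string, LiveChatBinding> Active =
        new ConcurrentDictionary<string, LiveChatBinding>();
}

// In the bot's MessagesController, before falling back to the default routing rules (step 3):
//
//   LiveChatBinding binding;
//   if (LiveChatSessions.Active.TryGetValue(activity.Conversation.Id, out binding))
//   {
//       await liveChatClient.SendToLiveChatAsync(binding, activity.Text); // forward the user text to the agent
//   }
//
// When the watcher thread times out or receives an end-chat event (step 4):
//
//   LiveChatSessions.Active.TryRemove(activity.Conversation.Id, out binding);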

Here are some screenshots:

The chat begins with a bot that simply repeats the sentences we write

Screen Shot 2018-03-20 at 9.32.05 PM

Live Agent is ready to handle new calls

Screen Shot 2018-03-20 at 9.32.27 PM

 

Let’s ask for help

Screen Shot 2018-03-20 at 9.35.01 PM

And here the request arrives in Live Chat

Screen Shot 2018-03-20 at 9.35.15 PM

Once it is accepted, we can start the hand-off, opening a case in Salesforce

Screen Shot 2018-03-20 at 9.35.28 PM

And here we can check whether we are talking to a human 🙂

Screen Shot 2018-03-20 at 9.38.56 PM

Screen Shot 2018-03-20 at 9.38.40 PM

In the third and final part we will look at some code snippets that showcase this functionality, and we will describe what a good design of the solution could look like if we want to industrialize it.

 


Let's dig into our email!

Like many of you, even though we are almost in 2018, I still work A LOT with email, and recently I asked myself the following question: what if I could leverage analytics and machine learning to get a better understanding of my emails?


Here is a quick way to figure out who inspires you the most


and who, instead, spreads a bit more negativity into your daily job 🙂


You will need (if you want to process ALL your emails in one shot!):

  1. Windows 7/8/10
  2. Outlook 2013 or 2016
  3. Access 2013 or 2016
  4. An Azure Subscription
  5. An Azure Data Lake Store and Data Lake Analytics account
  6. PowerBI Desktop or any other Visualization Tool you like (Tableau or simply Excel)

Step 1: Link MS Access tables to your Outlook folders as explained here.

Step 2: Export your emails from Access to CSV files.

Step 3: Upload those files to your data lake store.

Step 4: Process the fields containing text data with the U-SQL cognitive extensions and derive the sentiment and key phrases of each email.

Step 5: With Power BI Desktop, access the output data sitting in the data lake store as described here.

Step 6: Find the senders with the highest average sentiment and the ones with the lowest 🙂.


If you are worried about leaving your emails in the cloud, you can, after obtaining the sentiment and key phrases, download this latest output and remove all the data from the data lake store, using this (local) file as input for Power BI Desktop.

In addition to this, I would also suggest performing a one-way hash of the sender email address and uploading to the data lake store account the emails with this hashed field instead of the real sender.
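As a rough C# sketch of this idea (the helper name is mine, and the exact normalization is up to you), hashing the sender before the upload could look like this:

using System;
using System.Security.Cryptography;
using System.Text;

public static class SenderHasher
{
    // Returns a one-way SHA-256 hash of the sender address, hex encoded.
    public static string Hash(string senderEmail)
    {
        using (var sha = SHA256.Create())
        {
            byte[] bytes = sha.ComputeHash(Encoding.UTF8.GetBytes(senderEmail.Trim().ToLowerInvariant()));
            return BitConverter.ToString(bytes).Replace("-", string.Empty);
        }
    }
}

// Example: SenderHasher.Hash("alice@contoso.com") always yields the same value,
// so you can still group and join by sender without exposing the real address.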


Once you have the Data Lake Analytics job results, you can download them and join them locally in Access to associate each email with its original sender again.

 

How to generate Terabytes of IoT data with Azure Data Lake Analytics

Hi everyone, during one of my projects I’ve been asked the following question:

I'm currently storing my IoT sensor data in Azure Data Lake for analysis and feature engineering, but I still have very few devices, so not a big amount of data, and I'm not able to understand how fast my queries and transformations will be when, with many more devices and months or years of sensor data, my data lake grows to several terabytes.

Well, in that case let's quickly generate those terabytes of data using U-SQL!

Let’s assume that our data resembles the following:

deviceId, timestamp, sensorValue, …….

So for each IoT device we have a unique identifier called deviceId (let's assume it is a mix of numbers and letters), a timestamp indicating, with millisecond precision, when the IoT event was generated, and finally the values of the sensors at that moment (temperature, speed, etc.).

The idea is the following: given a real deviceId, generate N "synthetic deviceIds" that all carry the same data as the original device. So if we have, for example, 5 real deviceIds each with 100,000,000 records (500,000,000 records in total) and we generate 1,000 synthetic deviceIds for each real deviceId, we get 1,000 x 5 x 100,000,000 additional records, i.e. 500,000,000,000 records.

But we can expand the amount of synthetic data even more by playing with time: for example, if our real data has events only for 2017, we can duplicate the entire dataset for every year from 2006 to 2016 and end up with on the order of 5,000,000,000,000 records.

Here is some sample C# code that generates the synthetic deviceIds:

Note the GetArrayOfSynteticDevices function, which will be executed inside the U-SQL script.
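A minimal sketch of such a helper follows; the synthetic-id format (the real deviceId plus a numeric suffix) is an assumption, while the SqlArray return type matches the EXPLODE call used in the script below:

using System.Linq;
using Microsoft.Analytics.Types.Sql;

namespace Microsoft.DataGenUtils
{
    public static class SyntheticData
    {
        // Returns 'count' synthetic device ids derived from the real one,
        // e.g. "dev42" -> "dev42_S0001", "dev42_S0002", ...
        public static SqlArray<string> GetArrayOfSynteticDevices(string deviceId, int count)
        {
            return new SqlArray<string>(
                Enumerable.Range(1, count).Select(i => string.Format("{0}_S{1:D4}", deviceId, i)));
        }
    }
}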

Before using it, we have to register the assembly in our Data Lake Analytics account and database (in my case, the master one):

DROP ASSEMBLY master.[Microsoft.DataGenUtils];
CREATE ASSEMBLY master.[Microsoft.DataGenUtils] FROM @"location of dll";

Now we can read the original IoT data and create the additional data:

REFERENCE ASSEMBLY master.[Microsoft.DataGenUtils];

@t0 =
    EXTRACT deviceid string,
            timeofevent DateTime,
            sensorvalue float
    FROM "2017/IOTRealData.csv"
    USING Extractors.Csv();

// Let's get the distinct list of all the real deviceIds
@t1 =
    SELECT DISTINCT deviceid AS deviceid
    FROM @t0;

// Let's calculate for each deviceId an array of 1000 synthetic devices
@t2 =
    SELECT deviceid,
           Microsoft.DataGenUtils.SyntheticData.GetArrayOfSynteticDevices(deviceid, 1000) AS SyntheticDevices
    FROM @t1;

// Let's assign to each array of synthetic devices the same data as the corresponding real device
@t3 =
    SELECT a.SyntheticDevices,
           de.timeofevent,
           de.sensorvalue
    FROM @t0 AS de
         INNER JOIN @t2 AS a ON de.deviceid == a.deviceid;

// Let's use EXPLODE to expand each array into records
@t1Exploded =
    SELECT emp AS deviceid,
           de.timeofevent,
           de.sensorvalue
    FROM @t3 AS de
         CROSS APPLY EXPLODE(de.SyntheticDevices) AS dp(emp);

// Now we can write the expanded data
OUTPUT @t1Exploded
TO "SyntethicData/2017/expanded_{*}.csv"
USING Outputters.Csv();

Once you have the expanded data for the entire 2017, you can just use C# DateTime functions that add years, months or days to a specific date, apply them to the timeofevent column, and write the new data to a new folder (for example SyntethicData/2016, SyntethicData/2015, etc.).
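As a small illustration of the C# side of that step (plain C# here; in U-SQL the same AddYears expression can be applied directly to the timeofevent column in a SELECT):

using System;

class TimeShiftExample
{
    static void Main()
    {
        DateTime timeofevent = new DateTime(2017, 6, 15, 10, 30, 0, 250);

        // Re-create the same event for every year from 2006 to 2016.
        for (int year = 2006; year <= 2016; year++)
        {
            DateTime shifted = timeofevent.AddYears(year - 2017);
            Console.WriteLine(shifted.ToString("yyyy-MM-dd HH:mm:ss.fff"));
        }
    }
}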

 

My Top 2 Microsoft Build 2017 Sessions

Let's start with Number 1: this is the visionary cloud that is arriving, compute nodes combined with FPGA neurons that act as hardware microservices, communicating and changing their internal code while directly attached to the Azure network, like a global neural network. Do you want to know more? Click here and explore the content of this presentation with the new Video Indexer AI, jumping directly to the portions of the video you like and searching for words, concepts and people appearing in the video.

We can then look here at Number 2 (go to 1:01:54):

Matt Velloso teaching a robot (Zenbo) how to recognize the images it sees, using the Microsoft Bot Framework and the new Custom Image Recognition Service.

Do you want to explore more?

Go to Channel 9 and have fun exploring all the massive updates that have been released!

Pyspark safely on Data Lake Store and Azure Storage Blob

Hi, I'm working on several projects where it is required to access cloud storage (in this case Azure Data Lake Store and Azure Blob Storage) from pyspark running on Jupyter, while avoiding that all the Jupyter users access these storages with the same credentials stored inside the core-site.xml configuration file of the Spark cluster.


I started my investigation by looking at the SparkSession that comes with Spark 2.0, especially at commands like spark.conf.set("spark.sql.shuffle.partitions", 6), but I discovered that these commands do not work at the Hadoop settings level; they are limited to the Spark runtime parameters.

I then moved my attention to SparkContext, and in particular to hadoopConfiguration, which seemed promising but is missing from the pyspark implementation…

Finally, I was able to find this excellent Stack Overflow post that points out how to leverage the hadoopConfiguration functionality from pyspark.

So in a nutshell you can have the core-site.xml defined as follows:

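A sketch along these lines, with only the filesystem implementation entries (the same ones that appear later in this post) and no account keys:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.AbstractFileSystem.wasb.Impl</name>
    <value>org.apache.hadoop.fs.azure.Wasb</value>
  </property>
  <property>
    <name>fs.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.AdlFileSystem</value>
  </property>
  <property>
    <name>fs.AbstractFileSystem.adl.impl</name>
    <value>org.apache.hadoop.fs.adl.Adl</value>
  </property>
</configuration>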

So as you can see we do not store any credential here.

Let's see how to access an Azure Storage blob container with a shared access signature (SAS) that can be created specifically for a single container (imagine it like a folder), giving an almost fine-grained security model on the Azure Storage account without sharing the Azure Blob Storage access keys.

If you love Python, here is some code that an admin can use to quickly generate SAS signatures that last for 24 hours:

from azure.storage.blob import (
BlockBlobService,
ContainerPermissions
)

from datetime import datetime, timedelta

account_name ="ACCOUNT_NAME"
account_key ="ADMIN_KEY"
CONTAINER_NAME="CONTAINER_NAME"

block_blob_service = BlockBlobService(account_name=account_name, account_key=account_key)

sas_url = block_blob_service.generate_container_shared_access_signature(CONTAINER_NAME,ContainerPermissions.READ,datetime.utcnow() + timedelta(hours=24),)

print(sas_url)

You will obtain something like this:

sv=2015-04-05&st=2015-04-29T22%3A18%3A26Z&se=2015-04-30T02%3A23%3A26Z&sr=b&sp=rw&sip=168.1.5.60-168.1.5.70&spr=https&sig=Z%2FRHIX5Xcg0Mq2rqI3OlWTjEg2tYkboXr1P9ZUXDtkk%3D

You can refer to this link to understand the structure.

OK, now, once the Azure Storage admin provides us with the signature, we can use this SAS to directly and safely access the files in the Azure Storage blob container:

 
 sc._jsc.hadoopConfiguration().set("fs.azure.sas.PUT_YOUR_CONTAINER_NAME.PUT_YOUR_ACCOUNT_NAME.blob.core.windows.net", "PUT_YOUR_SIGNATURE")
 from pyspark.sql.types import *

# Load the data. We use the sample HVAC.csv file from the HDInsight samples
 hvacText = sc.textFile("wasbs://PUT_YOUR_CONTAINER_NAME@PUT_YOUR_ACCOUNT_NAME.blob.core.windows.net/HVAC.csv")

# Create the schema
 hvacSchema = StructType([StructField("date", StringType(), False),StructField("time", StringType(), False),StructField("targettemp", IntegerType(), False),StructField("actualtemp", IntegerType(), False),StructField("buildingID", StringType(), False)])

# Parse the data in hvacText
 hvac = hvacText.map(lambda s: s.split(",")).filter(lambda s: s[0] != "Date").map(lambda s:(str(s[0]), str(s[1]), int(s[2]), int(s[3]), str(s[6]) ))

# Create a data frame
 hvacdf = sqlContext.createDataFrame(hvac,hvacSchema)

# Register the data frame as a table to run queries against
 hvacdf.registerTempTable("hvac")
 from pyspark.sql import HiveContext
 hive_context = HiveContext(sc)
 bank = hive_context.table("hvac")
 bank.show()

The same idea can be applied to Data Lake Store. Assuming that you have your data lake credentials set up as described here, you can access Data Lake Store safely in this way:

 
sc._jsc.hadoopConfiguration().set("dfs.adls.oauth2.refresh.url", "https://login.microsoftonline.com/PUT_YOUR_TENANT_ID/oauth2/token")
sc._jsc.hadoopConfiguration().set("dfs.adls.oauth2.client.id", "PUT_YOUR_CLIENT_ID")
sc._jsc.hadoopConfiguration().set("dfs.adls.oauth2.credential", "PUT_YOUR_SECRET")
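# Note (assumption): depending on how the cluster's core-site.xml is configured, you may also need
# sc._jsc.hadoopConfiguration().set("dfs.adls.oauth2.access.token.provider.type", "ClientCredential")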


  from pyspark.sql.types import *

# Load the data. The path below assumes Data Lake Store is default storage for the Spark cluster
  hvacText = sc.textFile("adl://YOURDATALAKEACCOUNT.azuredatalakestore.net/Samples/Data/HVAC.csv")

# Create the schema
  hvacSchema = StructType([StructField("date", StringType(), False),StructField("time", StringType(), False),StructField("targettemp", IntegerType(), False),StructField("actualtemp", IntegerType(), False),StructField("buildingID", StringType(), False)])

  # Parse the data in hvacText
  hvac = hvacText.map(lambda s: s.split(",")).filter(lambda s: s[0] != "Date").map(lambda s:(str(s[0]), str(s[1]), int(s[2]), int(s[3]), str(s[6]) ))

  # Create a data frame
  hvacdf = sqlContext.createDataFrame(hvac,hvacSchema)

  # Register the data frame as a table to run queries against
  hvacdf.registerTempTable("hvac")

    from pyspark.sql import HiveContext
hive_context = HiveContext(sc)
bank = hive_context.table("hvac")
bank.show()

Happy coding with pyspark and Azure!

Data scientists wanna have fun!

Hi everyone, yes I’m back!

This time we are going to set up a Big Data playground on Azure that can be really useful for any Python/pyspark data scientist.

Typically, what you get out of the box on Azure for this task is a Spark HDInsight cluster (i.e. Hadoop on Azure in Platform-as-a-Service mode) connected to Azure Blob Storage (where the data is stored), running pyspark Jupyter notebooks.

It's a fully managed cluster that you can start in a few clicks and that gives you all the Big Data power you need to crunch billions of rows of data. This means that cluster node configuration, libraries, networking, etc. are all handled automatically for you, and you just have to think about solving your business problems without worrying about IT tasks like "check if the cluster is alive, check if the cluster is OK, etc."; Microsoft does this for you.

Now, one key ask that data scientists have is "freedom!": in other words, they want to install/update new libraries and try new open source packages, but at the same time they don't want to manage "a cluster" like an IT department.

In order to satisfy these two requirements we need some extra pieces in our playground and one key component is the Azure Linux Data Science Virtual Machine.

The Linux Data Science Virtual Machine is the Swiss Army knife for all data science needs; here you can get an idea of all the incredible tasks you can accomplish with this product.

In this case I’m really interested in these capabilities:

  • It’s a VM so data scientists can add/update all the libraries they need
  • Jupyter and Spark are already installed on it so data scientists can use it to play locally and experiment on small data before going “Chuck Norris mode” on HDInsight

But there is something missing here… As a data scientist, I would love to work in one unified environment, accessing all my data and switching with a simple click from local to "cluster" mode without changing anything in my code or my configuration.

Hmmm… it seems impossible; some magic is needed here!

Wait a minute, did you say "magic"? I think we have that kind of magic 🙂: it's sparkmagic!

In fact, we can use the local Jupyter and Spark environment by default and, when we need the power of the cluster, use sparkmagic to run the same code on the cluster simply by changing the kernel of the notebook!

In order to complete the setup we need to do the following:

  1. Add to the Linux DS VM the ability to connect, via local Spark, to Azure Blob Storage (adding libraries, conf files and settings)
  2. Add sparkmagic to the Linux DS VM (adding libraries, conf files and settings) to connect from a local Jupyter notebook to the HDInsight cluster using Livy

Here are the detailed instructions:

Step 1: To start using Azure Blob Storage from your Spark program, run the following (make sure you run these commands as root):

cd $SPARK_HOME/conf
cp spark-defaults.conf.template spark-defaults.conf
cat >> spark-defaults.conf <<EOF
spark.jars                 /dsvm/tools/spark/current/jars/azure-storage-4.4.0.jar,/dsvm/tools/spark/current/jars/hadoop-azure-2.7.3.jar
EOF

If you don't have a core-site.xml file in the $SPARK_HOME/conf directory, run the following:

cat >> core-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.AbstractFileSystem.wasb.Impl</name>
    <value>org.apache.hadoop.fs.azure.Wasb</value>
  </property>
  <property>
    <name>fs.azure.account.key.YOURSTORAGEACCOUNT.blob.core.windows.net</name>
    <value>YOURSTORAGEACCOUNTKEY</value>
  </property>
</configuration>
EOF

Otherwise, just copy and paste the two <property> sections above into your core-site.xml file, replacing the placeholders with the actual name of your Azure storage account and your storage account key.

Once you do these steps, you should be able to access the blob from your Spark program with the wasb://YourContainer@YOURSTORAGEACCOUNT.blob.core.windows.net/YourBlob URL in the read API.

Step 2: Enable the local Jupyter notebook with remote Spark execution on HDInsight (assuming the default Python is 3.5, as it comes on the Linux DS VM):

sudo /anaconda/envs/py35/bin/pip install sparkmagic

cd /anaconda/envs/py35/lib/python3.5/site-packages

sudo /anaconda/envs/py35/bin/jupyter-kernelspec install sparkmagic/kernels/pyspark3kernel

sudo /anaconda/envs/py35/bin/jupyter-kernelspec install sparkmagic/kernels/sparkkernel

sudo /anaconda/envs/py35/bin/jupyter-kernelspec install sparkmagic/kernels/sparkrkernel

 

In your /home/{YourLinuxUsername}/ folder:

  1. Create a folder called .sparkmagic and, inside it, a file called config.json
  2. Write in that file the configuration values of your HDInsight cluster (Livy endpoints and authentication) as described here.

At this point, going back to Jupyter should allow you to run your notebooks against the HDInsight cluster using the PySpark3, Spark and SparkR kernels, and you can switch from local to remote kernel execution with one click!

Of course, some security features have to be improved (passwords in clear text!), but the community is already working on this (see here the support for base64 encoding) and, of course, you can get the sparkmagic code from git, add the encryption support you need and contribute it back to the community!

Have fun with Spark and Spark Magic!

UPDATE: here are the instructions on how to also connect to Azure Data Lake Store!

  1. Download this package and extract just these two libraries: azure-data-lake-store-sdk-2.0.11.jar and hadoop-azure-datalake-3.0.0-alpha2.jar
  2. Copy these libraries to "/home/{YourLinuxUsername}/Desktop/DSVM tools/spark/spark-2.0.2/jars/"
  3. Add their paths to the list of library paths inside spark-defaults.conf, as we did before
  4. Go here and, after you have created your AAD application, note down the Client ID, Client Secret and Tenant ID
  5. Add the following properties to your core-site.xml, replacing the values with the ones you obtained in the previous step:

    <property><name>dfs.adls.oauth2.access.token.provider.type</name><value>ClientCredential</value></property>
    <property><name>dfs.adls.oauth2.refresh.url</name><value>https://login.microsoftonline.com/{YOUR TENANT ID}/oauth2/token</value></property>
    <property><name>dfs.adls.oauth2.client.id</name><value>{YOUR CLIENT ID}</value></property>
    <property><name>dfs.adls.oauth2.credential</name><value>{YOUR SECRET ID}</value></property>
    <property><name>fs.adl.impl</name><value>org.apache.hadoop.fs.adl.AdlFileSystem</value></property>
    <property><name>fs.AbstractFileSystem.adl.impl</name><value>org.apache.hadoop.fs.adl.Adl</value></property>

Integrating an Azure API App protected by Azure Active Directory with Salesforce using OAuth

This time I had to face a new integration challenge: on Salesforce Service Cloud, in order to offer a personalized service to customers requesting assistance, I added a call to an Azure API app that exposes all the information the company has about the customer across all the touch points (web, mobile, etc.). Apex coding is quite straightforward when dealing with simple HTTP calls and the interchange of JSON objects; it becomes trickier when you have to deal with authentication.

In my specific case, the token-based authentication I had to put in place consists of the following steps:

  1. Identify the URL that accepts our authentication request and returns the authentication token
  2. Compose the authentication request with all the needed parameters that define the requester identity and the target audience
  3. Retrieve the token and shape all subsequent requests to the target app by inserting this token in the header
  4. Nice to have: cache the token in order to reuse it across multiple Apex calls, and refresh it before it expires or on request

Basically all the info we need is contained in this single Microsoft page.

So, before even touching a single line of code, we have to register the calling and called applications in Azure Active Directory (this gives both of them an "identity").

This step should already be done for the Azure API app when you protect it with AAD using the portal (write down the client ID of the AAD-registered app), while for the caller (Salesforce) you just register a simple app of your choice in AAD.

When you do this step, write down the client ID and the client secret that the portal gives you.

Now you need your tenant ID, specific to your Azure subscription. There are multiple ways of retrieving this parameter, as stated here; for me the PowerShell option worked.

Once you have these 4 parameters you can build a POST request in this way:

Endpoint: https://login.microsoftonline.com/<tenant id>/oauth2/token

Header:

Content-Type: application/x-www-form-urlencoded

Request body:

grant_type=client_credentials&client_id=<client  id of salesforce app>&client_secret=<client secret of the salesforce app>&resource=<client id of the azure API app>

If everything goes as expected you will receive this JSON response:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsIng1dCI6IjdkRC1nZWNOZ1gxWmY3R0xrT3ZwT0IyZGNWQSIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0ODI2NywibmJmIjoxMzg4NDQ4MjY3LCJleHAiOjEzODg0NTIxNjcsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6ImE5OTE5MTYyLTkyMTctNDlkYS1hZTIyLWYxMTM3YzI1Y2RlYSIsInN1YiI6ImE5OTE5MTYyLTkyMTctNDlkYS1hZTIyLWYxMTM3YzI1Y2RlYSIsImlkcCI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0LzdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZS8iLCJhcHBpZCI6ImQxN2QxNWJjLWM1NzYtNDFlNS05MjdmLWRiNWYzMGRkNThmMSIsImFwcGlkYWNyIjoiMSJ9.aqtfJ7G37CpKV901Vm9sGiQhde0WMg6luYJR4wuNR2ffaQsVPPpKirM5rbc6o5CmW1OtmaAIdwDcL6i9ZT9ooIIicSRrjCYMYWHX08ip-tj-uWUihGztI02xKdWiycItpWiHxapQm0a8Ti1CWRjJghORC1B1-fah_yWx6Cjuf4QE8xJcu-ZHX0pVZNPX22PHYV5Km-vPTq2HtIqdboKyZy3Y4y3geOrRIFElZYoqjqSv5q9Jgtj5ERsNQIjefpyxW3EwPtFqMcDm4ebiAEpoEWRN4QYOMxnC9OUBeG9oLA0lTfmhgHLAtvJogJcYFzwngTsVo6HznsvPWy7UP3MINA",
  "token_type": "Bearer",
  "expires_in": "3599",
  "expires_on": "1388452167",
  "resource": "<client id of the azure API app>"
}

Now you are finally ready to call the Azure API app endpoint; in fact, the only additional thing you have to do is add to the HTTP request a header with the following content:

Authorization: Bearer <access_token coming from the previous request> 
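Outside of Apex, the same two calls can be sketched in C# as follows; the /api/customer360 path is just a placeholder, and the JSON parsing of the token response is left out:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AadClientCredentialsSketch
{
    static async Task Main()
    {
        // The four values collected from the AAD registrations described above.
        string tenantId     = "<tenant id>";
        string clientId     = "<client id of salesforce app>";
        string clientSecret = "<client secret of the salesforce app>";
        string resource     = "<client id of the azure API app>";

        using (var http = new HttpClient())
        {
            // 1. Client-credentials token request (the POST described above).
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "client_credentials" },
                { "client_id", clientId },
                { "client_secret", clientSecret },
                { "resource", resource }
            });
            HttpResponseMessage tokenResponse = await http.PostAsync(
                "https://login.microsoftonline.com/" + tenantId + "/oauth2/token", body);
            string tokenJson = await tokenResponse.Content.ReadAsStringAsync();
            string accessToken = "<access_token extracted from tokenJson>";

            // 2. Call the protected API app with the Bearer header.
            http.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);
            string result = await http.GetStringAsync("https://<UrlOfYourApiApp>/api/customer360");
            Console.WriteLine(result);
        }
    }
}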

This should be sufficient to complete our scenario (by the way, do not forget to add https://login.microsoftonline.com and https://<UrlOfYourApiApp> to the authorized remote URLs list of your Salesforce setup).

Using the expiry data of the token you can figure out how long it will last (the sample response above expires in 3599 seconds, about an hour) and set up your caching strategy accordingly.

Happy integration then!

PS:

Here are some Apex snippets that implement what has been explained above.

public with sharing class AADAuthentication {
    private static String TokenUrl = 'https://login.microsoftonline.com/putyourtenantid/oauth2/token';
    private static String grant_type = 'client_credentials';
    private static String client_id = 'putyourSdfcAppClientId';
    private static String client_secret = 'putyourSdfcAppClientSecret';
    private static String resource = 'putyourAzureApiAppClientId';
    private static String JSonTestUrl = 'putyourAzureApiUrlyouwanttoaccess';

    public static String AuthAndAccess() {
        String responseText = '';
        String accessToken = getAuthToken();
        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setEndpoint(JSonTestUrl);
        // req.setBody(accessToken);
        Http http = new Http();
        try {
            HTTPResponse res = http.send(req);
            responseText = res.getBody();
            System.debug('STATUS:' + res.getStatus());
            System.debug('STATUS_CODE:' + res.getStatusCode());
            System.debug('COMPLETE RESPONSE: ' + responseText);
        } catch (System.CalloutException e) {
            System.debug(e.getMessage());
        }
        return responseText;
    }

    public static String getAuthToken() {
        String responseText = '';
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
        String requestString = 'grant_type=' + grant_type + '&client_id=' + client_id + '&client_secret=' + client_secret + '&resource=' + resource;
        req.setBody(requestString);
        System.debug('requestString:' + requestString);
        req.setEndpoint(TokenUrl);
        Http http = new Http();
        try {
            HTTPResponse res = http.send(req);
            responseText = res.getBody();
            System.debug('STATUS:' + res.getStatus());
            System.debug('STATUS_CODE:' + res.getStatusCode());
            System.debug('COMPLETE RESPONSE: ' + responseText);
        } catch (System.CalloutException e) {
            System.debug(e.getMessage());
        }

        JSONParser parser = JSON.createParser(responseText);
        String accessToken = '';
        while (parser.nextToken() != null) {
            if ((parser.getCurrentToken() == JSONToken.FIELD_NAME) &&
                (parser.getText() == 'access_token')) {
                parser.nextToken();
                accessToken = parser.getText();
                break;
            }
        }
        System.debug('accessToken: ' + accessToken);
        return accessToken;
    }
}