Digital Marketing in practice (part 3)

In the first and second parts of this series we already enumerated several systems that need to be in place to actually run our digital toy shop:

E-commerce (this alone is a universe of components: payment gateways, web log analytics, social integrations, ratings & reviews, coupons/promotions, etc.), supply chain, CRM, marketing platform and, last but not least, the DMP with all its pieces and integrations.

How do we manage all those platforms, and how do we integrate them?

How do we do something “simple” like defining a campaign with a promotion and a special look & feel, and have that promotion displayed only to selected customers, regardless of whether they browse the e-commerce site, visit the Facebook page, or use the iPhone/Android app? And what if we want this to happen only x times for each specific customer, and never again after that?

What if we want to do this for N (let’s say N=30 or 300) customer groups?

If we have 4-5 or more front ends (e-commerce site, iPhone app, Android app, social pages, marketing emails, etc.), do we have to make 5×300 = 1500 UI changes (one for each combination of front end and customer group)?

How do we recognize the same customer across different front ends in order to offer them the right promotions (multi-channel)?

And what if we also add retail stores (our own retail stores) into the mix (omni-channel)?

What is the cost if we do this the wrong way?

Finally, one of the most desired features: can we have a customer-group journey builder and, at each step of the journey, decide what happens on all the touch points (front ends) for those customers?


Those questions are the bread and butter of digital marketing and sales departments trying to serve their clients with the best-of-breed technologies available at a reasonable cost, not only in terms of licensing and customization but mainly in terms of maintenance and integration costs.

In fact, a “build” approach for those scenarios is simply impossible, not only from a budget perspective but principally from a time-to-market perspective.

In order to proceed and propose some feasible approaches, we first need to figure out which key enablers can help simplify our landscape.

A first consideration is the following:

If we want to provide the same “service” to our customers on different touch points, we need to recognize customers in those front ends in the very same way.

One way to achieve this is to assign each customer a unique identity and reuse it to identify them across all channels.

Ideally we need an identity solution with hashing, encryption, cyber threat detection, multi-factor authentication, social logins, etc.

More importantly, we need an identity provider that is pluggable into our e-commerce, apps and other touch points.

Assuming that our identity provider is in place (here a list of possible vendors), we can already offer some very nice features and obtain important benefits:

1. We do not manage passwords; we offload identity security and compliance almost completely to our identity provider.

2. We can offer social or SMS login, so people do not have to remember new passwords but can reuse identities they already have.

3. We can leverage advanced identity provider features like progressive profiling (ask our customers for information at the right time, not all at once at registration), SSO across multiple touch points (if you are already logged in on website A you are automatically logged in on website B), conditional multi-factor authentication (MFA), a mobile app with QR code scanning for MFA instead of SMS, etc.

4. Identity forms (registration, password reset, login/multi-factor login, etc.) are managed centrally, once and for all, in the identity provider: defined once, used on all the touch points.

5. Custom schema: in addition to the usual username/user ID, modern identity providers also give you a flexible schema for customer identity data that you can leverage for multiple purposes. We will see the value of this soon.

In the next part we will look at the architectural benefits this choice can bring and at how we can leverage it to deliver the previously discussed features.


Send Emails with Adobe Campaign Marketing Cloud (Neolane)

Hi, this time, instead of downloading data from Neolane or updating recipients in it, we want to use Neolane as an email cannon, leveraging its powerful template engine to manage the email look & feel while deciding on the fly, via the Neolane API, the targets of our mass email campaign.

So the use case is the following: define in Neolane an email template with some field mappings, then leverage the Neolane API to send emails using this template while defining the email recipients, and the contents of the mapped fields, externally.

According to the official Adobe documentation this can be done using the Neolane Business Oriented APIs (we looked into the Data Oriented APIs in the previous articles), as specified here:

https://docs.campaign.adobe.com/doc/AC6.1/en/CFG_API_Business_oriented_APIs.html#SubmitDelivery__nms-delivery_

Unfortunately the documentation is not really clear or complete, and I really had to dig into Adobe logs, error codes and SOAP responses to get this working properly; here is some sample code that can help you.

The code is based on the sample provided in the Adobe documentation (external file mapping, with the data coming from the CDATA section inside the delivery XML tag structure).

Here is the C# code:




// The Neolane SOAP endpoint (soaprouter page), read from configuration as in the other samples
string adobeApiUrl = ConfigurationManager.AppSettings["adobeApiUrl"];

string sessionToken = "Look at the other Neolane code samples on how to retrieve the session token";
string securityToken = "Look at the other Neolane code samples on how to retrieve the security token";

// Internal name of the delivery template defined in Neolane
string scenario = "Write here the internal name of the delivery template";

// Create the web request and address it to the SubmitDelivery method of the nms:delivery service
HttpWebRequest reqData = (HttpWebRequest)WebRequest.Create(adobeApiUrl);
reqData.ContentType = "text/xml; charset=utf-8";
reqData.Headers.Add("SOAPAction", "nms:delivery#SubmitDelivery");
reqData.Headers.Add("X-Security-Token", securityToken);
reqData.Headers.Add("cookie", "__sessiontoken=" + sessionToken);
reqData.Method = "POST";

// SOAP envelope header. Note that the XML declaration says UTF-8 to match the
// Encoding.UTF8.GetBytes call below (an ISO-8859-1 declaration would garble
// accented characters like the one in the sample payload).
string strWriteHeader = "<?xml version='1.0' encoding='UTF-8'?>" +
    "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:urn=\"urn:nms:delivery\">" +
    "<soapenv:Header/>" +
    "<soapenv:Body>" +
    "<urn:SubmitDelivery>" +
    "<urn:sessiontoken>" + sessionToken + "</urn:sessiontoken>" +
    "<urn:strScenarioName>" + scenario + "</urn:strScenarioName>" +
    "<urn:elemContent>";

// External source targets: a pipe-delimited header line plus one line per
// recipient, wrapped in a CDATA section as in the Adobe documentation sample
string strWriteRecipientBody = "<delivery>" +
    "<targets fromExternalSource=\"true\">" +
    "<externalSource><![CDATA[MsgId|ClientId|Title|Name|FirstName|Mobile|Email|Market_segment|Product_affinity1|Product_affinity2|Product_affinity3|Product_affinity4|Support_Number|Amount|Threshold" +
    System.Environment.NewLine +
    "1|000001234|M.|Phulpin|Hervé|0650201020|herve.phulpin@adobe.com|1|A1|A2|A3|A4|E12|120000|100000]]></externalSource>" +
    "</targets>" +
    "</delivery>";

string strWriteFooter = "</urn:elemContent>" +
    "</urn:SubmitDelivery>" +
    "</soapenv:Body>" +
    "</soapenv:Envelope>";

string bodyData = strWriteHeader + strWriteRecipientBody + strWriteFooter;

byte[] byteArrayData = Encoding.UTF8.GetBytes(bodyData);

// Set the ContentLength property of the WebRequest.
reqData.ContentLength = byteArrayData.Length;
// Write the data to the request stream.
Stream dataStreamInputData = reqData.GetRequestStream();
dataStreamInputData.Write(byteArrayData, 0, byteArrayData.Length);
dataStreamInputData.Close();

// Execute the call and read the SOAP response.
var responseData = reqData.GetResponse();
Stream dataStreamData = responseData.GetResponseStream();
StreamReader readerData = new StreamReader(dataStreamData);
string responseFromServerData = readerData.ReadToEnd();

// Clean up the streams and the response.
readerData.Close();
responseData.Close();

return responseFromServerData;
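By the way, in a real scenario you will not hard-code the CDATA payload: you will build it from your own data. Below is a minimal sketch of such a helper; the ExternalTarget DTO is a hypothetical placeholder reduced to three fields, and the header line must list exactly the fields mapped in your delivery template:

using System.Collections.Generic;
using System.Text;

// Hypothetical DTO: adapt the fields to the mapping of your delivery template
public class ExternalTarget
{
    public string MsgId;
    public string ClientId;
    public string Email;
}

public static string BuildExternalSource(IEnumerable<ExternalTarget> targets)
{
    // Header line first: it must list the mapped fields in the template's order
    StringBuilder sb = new StringBuilder("MsgId|ClientId|Email");
    foreach (ExternalTarget t in targets)
    {
        sb.Append(System.Environment.NewLine)
          .Append(t.MsgId).Append('|')
          .Append(t.ClientId).Append('|')
          .Append(t.Email);
    }
    return sb.ToString();
}

You would then concatenate the returned string inside the <![CDATA[...]]> section in place of the hard-coded sample line.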

Integrating an Azure API App protected by Azure Active Directory with Salesforce using OAuth

This time I had to face a new integration challenge: on Salesforce Service Cloud, in order to offer a personalized service to customers requesting assistance, I added a call to an Azure app that exposes all the information the company has about the customer across all the touch points (web, mobile, etc.). Apex coding is quite straightforward when dealing with simple HTTP calls and the interchange of JSON objects; it becomes trickier when you have to deal with authentication.

In my specific case, the token-based authentication I had to put in place is composed of the following steps:

  1. Identify the URL that accepts our authentication request and returns the authentication token.
  2. Compose the authentication request with all the needed parameters that define the requester identity and the target audience.
  3. Retrieve the token and model all the subsequent requests to the target app by inserting this token in the header.
  4. Nice to have: cache the token in order to reuse it across multiple Apex calls, and refresh it before it expires or on request.

Basically all the info we need is contained in this single Microsoft page.

So, before even touching a single line of code, we have to register the calling and the called applications in Azure Active Directory (this gives both an “identity”).

For the Azure API app this step should already be done when you protect it with AAD using the portal (write down the client ID of the AAD-registered app), while for the caller (Salesforce) you just register a plain app in AAD.

When you do this step, write down the client ID and the client secret that the portal gives you.

Now you need your tenant ID, specific to your Azure subscription. There are multiple ways of retrieving this parameter, as stated here; for me the PowerShell option worked.

Once you have these 4 parameters you can build a POST request in this way:

Endpoint: https://login.microsoftonline.com/<tenant id>/oauth2/token

Header:

Content-Type: application/x-www-form-urlencoded

Request body:

grant_type=client_credentials&client_id=<client id of the salesforce app>&client_secret=<client secret of the salesforce app>&resource=<client id of the azure API app>

If everything goes as expected you will receive a JSON response like this:

{
  "access_token": "eyJhbGciOiJSUzI1NiIsIng1dCI6IjdkRC1nZWNOZ1gxWmY3R0xrT3ZwT0IyZGNWQSIsInR5cCI6IkpXVCJ9.eyJhdWQiOiJodHRwczovL3NlcnZpY2UuY29udG9zby5jb20vIiwiaXNzIjoiaHR0cHM6Ly9zdHMud2luZG93cy5uZXQvN2ZlODE0NDctZGE1Ny00Mzg1LWJlY2ItNmRlNTdmMjE0NzdlLyIsImlhdCI6MTM4ODQ0ODI2NywibmJmIjoxMzg4NDQ4MjY3LCJleHAiOjEzODg0NTIxNjcsInZlciI6IjEuMCIsInRpZCI6IjdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZSIsIm9pZCI6ImE5OTE5MTYyLTkyMTctNDlkYS1hZTIyLWYxMTM3YzI1Y2RlYSIsInN1YiI6ImE5OTE5MTYyLTkyMTctNDlkYS1hZTIyLWYxMTM3YzI1Y2RlYSIsImlkcCI6Imh0dHBzOi8vc3RzLndpbmRvd3MubmV0LzdmZTgxNDQ3LWRhNTctNDM4NS1iZWNiLTZkZTU3ZjIxNDc3ZS8iLCJhcHBpZCI6ImQxN2QxNWJjLWM1NzYtNDFlNS05MjdmLWRiNWYzMGRkNThmMSIsImFwcGlkYWNyIjoiMSJ9.aqtfJ7G37CpKV901Vm9sGiQhde0WMg6luYJR4wuNR2ffaQsVPPpKirM5rbc6o5CmW1OtmaAIdwDcL6i9ZT9ooIIicSRrjCYMYWHX08ip-tj-uWUihGztI02xKdWiycItpWiHxapQm0a8Ti1CWRjJghORC1B1-fah_yWx6Cjuf4QE8xJcu-ZHX0pVZNPX22PHYV5Km-vPTq2HtIqdboKyZy3Y4y3geOrRIFElZYoqjqSv5q9Jgtj5ERsNQIjefpyxW3EwPtFqMcDm4ebiAEpoEWRN4QYOMxnC9OUBeG9oLA0lTfmhgHLAtvJogJcYFzwngTsVo6HznsvPWy7UP3MINA",
  "token_type": "Bearer",
  "expires_in": "3599",
  "expires_on": "1388452167",
  "resource": "<client id of the azure API app>"
}
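Before writing any Apex, you can sanity-check the four parameters from any HTTP client. Here is a minimal C# sketch of the same token request (the placeholder values are obviously hypothetical):

using System;
using System.Collections.Generic;
using System.Net.Http;

class TokenSanityCheck
{
    static void Main()
    {
        // Hypothetical placeholders: use your real tenant id, client id/secret and resource
        string tenantId = "your-tenant-id";
        string clientId = "client id of the salesforce app";
        string clientSecret = "client secret of the salesforce app";
        string resource = "client id of the azure API app";

        using (var client = new HttpClient())
        {
            // Same form-urlencoded body described above
            var body = new FormUrlEncodedContent(new Dictionary<string, string>
            {
                { "grant_type", "client_credentials" },
                { "client_id", clientId },
                { "client_secret", clientSecret },
                { "resource", resource }
            });
            var response = client.PostAsync(
                "https://login.microsoftonline.com/" + tenantId + "/oauth2/token", body).Result;
            // The access_token field of the returned JSON goes into the Authorization header
            Console.WriteLine(response.Content.ReadAsStringAsync().Result);
        }
    }
}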

Now you are finally ready to call the Azure API app endpoint; in fact, the only additional thing you have to do is add to the HTTP request a header with the following content:

Authorization: Bearer <access_token coming from the previous request> 

This should be sufficient to complete our scenario (by the way, do not forget to add https://login.microsoftonline.com and https://<UrlOfYourApiApp> to the remote site settings of your Salesforce setup).

Using the expiry data of the token (expires_in is in seconds, so the token above lasts about one hour) you can figure out how long it will last and set up your caching strategy accordingly.
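As a sketch of the caching idea (shown here in C# for brevity; in Apex the same pattern can be implemented with a static variable within a transaction, or with Platform Cache across transactions), assuming a hypothetical RequestNewToken helper that wraps the POST above:

using System;

// Parsed fields from the token response shown above
public class TokenResponse
{
    public string AccessToken;
    public int ExpiresInSeconds; // the "expires_in" field, in seconds
}

public static class TokenCache
{
    private static string _token;
    private static DateTime _expiresAtUtc = DateTime.MinValue;

    public static string GetToken()
    {
        // Refresh a minute before expiry to avoid using an almost-expired token
        if (_token == null || DateTime.UtcNow >= _expiresAtUtc.AddMinutes(-1))
        {
            TokenResponse fresh = RequestNewToken();
            _token = fresh.AccessToken;
            _expiresAtUtc = DateTime.UtcNow.AddSeconds(fresh.ExpiresInSeconds);
        }
        return _token;
    }

    private static TokenResponse RequestNewToken()
    {
        // Hypothetical helper: wrap the token POST shown earlier and parse its JSON here
        throw new NotImplementedException();
    }
}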

Happy integration then!

PS: here are some Apex snippets that implement what was explained above.

public with sharing class AADAuthentication {
    private static String TokenUrl = 'https://login.microsoftonline.com/putyourtenantid/oauth2/token';
    private static String grant_type = 'client_credentials';
    private static String client_id = 'putyourSdfcAppClientId';
    private static String client_secret = 'putyourSdfcAppClientSecret';
    private static String resource = 'putyourAzureApiAppClientId';
    private static String JSonTestUrl = 'putyourAzureApiUrlyouwanttoaccess';

    // Gets a token from AAD and uses it to call the protected Azure API app
    public static String AuthAndAccess() {
        String responseText = '';
        String accessToken = getAuthToken();
        HttpRequest req = new HttpRequest();
        req.setMethod('GET');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setEndpoint(JSonTestUrl);
        Http http = new Http();
        try {
            HTTPResponse res = http.send(req);
            responseText = res.getBody();
            System.debug('STATUS: ' + res.getStatus());
            System.debug('STATUS_CODE: ' + res.getStatusCode());
            System.debug('COMPLETE RESPONSE: ' + responseText);
        } catch (System.CalloutException e) {
            System.debug(e.getMessage());
        }
        return responseText;
    }

    // Posts the client_credentials grant to the AAD token endpoint and
    // extracts the access_token field from the JSON response
    public static String getAuthToken() {
        String responseText = '';
        HttpRequest req = new HttpRequest();
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
        String requestString = 'grant_type=' + grant_type + '&client_id=' + client_id +
            '&client_secret=' + client_secret + '&resource=' + resource;
        req.setBody(requestString);
        System.debug('requestString: ' + requestString);
        req.setEndpoint(TokenUrl);
        Http http = new Http();
        try {
            HTTPResponse res = http.send(req);
            responseText = res.getBody();
            System.debug('STATUS: ' + res.getStatus());
            System.debug('STATUS_CODE: ' + res.getStatusCode());
            System.debug('COMPLETE RESPONSE: ' + responseText);
        } catch (System.CalloutException e) {
            System.debug(e.getMessage());
        }

        // Walk the JSON tokens until we find the access_token field
        JSONParser parser = JSON.createParser(responseText);
        String accessToken = '';
        while (parser.nextToken() != null) {
            if ((parser.getCurrentToken() == JSONToken.FIELD_NAME) &&
                (parser.getText() == 'access_token')) {
                parser.nextToken();
                accessToken = parser.getText();
                break;
            }
        }
        System.debug('accessToken: ' + accessToken);
        return accessToken;
    }
}

Integration with Adobe Campaign Marketing (aka Neolane) Part II

Hi, in the previous post we saw how to read information from Adobe Campaign Marketing.

This time I want to show you how to “write” to it, in particular how to add or modify recipients. This is, I believe, something you want to do regularly to keep in sync, for example, your users' preferences on your sites and their current status in your campaign database. A user who removes from his profile the consent to receive a specific newsletter expects that, from that moment on, he will never be disturbed again; if you do not sync this as soon as possible, you risk contacting someone who does not want to be contacted. On the other side, if a new user registers on your site, you want him in your campaign tool as soon as possible so you can target him.

Here is the C# code:


string adobeApiUrl = ConfigurationManager.AppSettings["adobeApiUrl"];
//Here for testing purposes username and password are simply read from config settings, but you should acquire them in a secure way!
string adobeUser = ConfigurationManager.AppSettings["adobeUser"];
string adobePass = ConfigurationManager.AppSettings["adobePass"];
//We need to write recipients, but they live inside a folder, so we have to know in advance the folder id or name
string adobeFolderId = ConfigurationManager.AppSettings["adobeFolderId"];
//Create the web request to the soaprouter page
HttpWebRequest req = (HttpWebRequest)WebRequest.Create(adobeApiUrl);
req.Method = "POST";
req.ContentType = "text/xml; charset=utf-8";
//Add to the headers the requested Service (session) that we want to call
req.Headers.Add("SOAPAction", "xtk:session#Logon");

string userName = adobeUser;
string pass = adobePass;
//We craft the SOAP envelope creating a session Logon request
string body = "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\" xmlns:urn=\"urn:xtk:session\">" +
"<soapenv:Header/><soapenv:Body><urn:Logon>" +
"<urn:sessiontoken/>" +
"<urn:strLogin>" + userName + "</urn:strLogin>" +
"<urn:strPassword>" + pass + "</urn:strPassword>" +
"<urn:elemParameters/>" +
"</urn:Logon></soapenv:Body></soapenv:Envelope>";
//We write the body to a byteArray to be passed with the Request Stream
byte[] byteArray = Encoding.UTF8.GetBytes(body);

// Set the ContentLength property of the WebRequest.
req.ContentLength = byteArray.Length;
// Get the request stream.
Stream dataStreamInput = req.GetRequestStream();
// Write the data to the request stream.
dataStreamInput.Write(byteArray, 0, byteArray.Length);
// Close the Stream object.
dataStreamInput.Close();

var responseAdobe = req.GetResponse();

Stream dataStream = responseAdobe.GetResponseStream();
// Open the stream using a StreamReader for easy access.
StreamReader reader = new StreamReader(dataStream);
// Read the content.
string responseFromServer = reader.ReadToEnd();
// Clean up the streams and the response.
reader.Close();
responseAdobe.Close();
//Manually parsing the response with an XMLDoc
System.Xml.XmlDocument xResponse = new XmlDocument();
xResponse.LoadXml(responseFromServer);
// We parse the response manually. This is again for testing purposes
XmlNode respx = xResponse.DocumentElement.FirstChild.FirstChild;

string sessionToken = respx.FirstChild.InnerText;
string securityToken = respx.LastChild.InnerText;

// We have done the login now we can actually do a query on Neolane
HttpWebRequest reqData = (HttpWebRequest)WebRequest.Create(adobeApiUrl);
reqData.ContentType = "text/xml; charset=utf-8";
//Add to the headers the requested Service (persist) that we want to call
reqData.Headers.Add("SOAPAction", "xtk:persist#Write");
reqData.Headers.Add("X-Security-Token", securityToken);
reqData.Headers.Add("cookie", "__sessiontoken=" + sessionToken);
reqData.Method = "POST";
//We craft the SOAP header; also here the session token seems to be needed
string strWriteHeader = "<?xml version='1.0' encoding='ISO-8859-1'?>" +
"<soapenv:Envelope xmlns:soapenv='http://schemas.xmlsoap.org/soap/envelope/' xmlns:urn='urn:xtk:session'>" +
"<soapenv:Header/>" +
"<soapenv:Body>" +
"<urn:Write>" +
"<urn:sessiontoken>" + sessionToken + "</urn:sessiontoken>" +
"<urn:domDoc>";

string strWriteRecipientBody = string.Empty;
string strWriteFooter = "</urn:domDoc>" +
"</urn:Write>" +
"</soapenv:Body>" +
"</soapenv:Envelope>";
//Here I loop over a list of objects that represent the Adobe recipients I want to write.
// The operation is insertOrUpdate, and the key that decides between insert and update is, in my case, the email.
// You can pick the key that you think is right. Note that the values are concatenated
// directly into the XML, so in production you should XML-escape them
// (e.g. with System.Security.SecurityElement.Escape).

foreach (AdobeRecipient recipient in updatesOnAdobe)
{
strWriteRecipientBody +=
"<recipient "
+ "_operation='insertOrUpdate' "
+ "_key='@email' "
+ "xtkschema='nms:recipient' "
+ "account='" + recipient.account + "' "
+ "lastName='" + recipient.lastName + "' "
+ "firstName='" + recipient.firstName + "' "
+ "email='" + recipient.email + "' "
+ "origin='" + recipient.origin + "' "
+ "company='" + recipient.company + "'>"
+ "<folder id='" + recipient.folderId + "'/> "
+ "</recipient> ";

}
//Full String ready to be passed
string bodyData = strWriteHeader + strWriteRecipientBody + strWriteFooter;

byte[] byteArrayData = Encoding.UTF8.GetBytes(bodyData);

// Set the ContentLength property of the WebRequest.
reqData.ContentLength = byteArrayData.Length;
// Get the request stream.
Stream dataStreamInputData = reqData.GetRequestStream();
// Write the data to the request stream.
dataStreamInputData.Write(byteArrayData, 0, byteArrayData.Length);
// Close the Stream object.
dataStreamInputData.Close();

var responseData = reqData.GetResponse();

Stream dataStreamData = responseData.GetResponseStream();
// Open the stream using a StreamReader for easy access.
StreamReader readerData = new StreamReader(dataStreamData);
// Read the content.
string responseFromServerData = readerData.ReadToEnd();
// Here we should receive an OK from Neolane
// Clean up the streams and the response.
readerData.Close();
responseData.Close();

return responseFromServerData;
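For completeness, the AdobeRecipient objects iterated in the loop above are just plain data holders. A minimal sketch of what such a class could look like (the fields simply mirror the attributes written in the XML, so adapt them to your schema):

// Simple data holder for the recipient attributes used in the write loop above
public class AdobeRecipient
{
    public string account;
    public string lastName;
    public string firstName;
    public string email;
    public string origin;
    public string company;
    public string folderId; // id of the Neolane folder that hosts the recipient
}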

Tips on cloud solutions integration with Azure Logic Apps

The cloud paradigm is a well-established reality: old CRM systems are now replaced by Salesforce solutions, HR and finance systems by Workday, old Exchange servers and SharePoint intranets by Office 365, without mentioning entire datacenters migrated to Amazon. At the same time, all the effort that was spent on premise to have all these systems communicate together (the glorious days of ESB solutions with TIBCO, WebSphere, BEA WebLogic, BizTalk, etc.) seems kind of lost in the new cloud world. Why?

Well, each cloud vendor (Salesforce first, I think) tries to position its own app store (AppExchange) with pre-built connectors or apps that quickly allow companies to integrate with other cloud platforms in a way that is not only cost effective but, most of all, supported by the app vendor rather than by a system integrator (so it is paid for by the license).

This should work in theory; in practice we see a lot of good will from niche players on these apps, but little or no commitment from the big vendors.

Luckily, however, the best cloud solutions already provide rich and secure APIs to enable integration; it's only a matter of connecting the dots, and here several “integration” cloud vendors are already positioning themselves: Informatica Cloud, Dell, SnapLogic, MuleSoft, etc. The Gartner quadrant for Integration Platform as a Service (iPaaS) represents the situation well.

But while Gartner produced that report in March 2015, Microsoft has since released a new kind of app on the Azure platform, called the Azure Logic App.

What is the strength of this “service” compared to the others? In my opinion, it lies in the fact that it runs on a de facto proven, standard platform that can scale as much as you want; it also gives you the possibility of writing your own connectors as Azure API apps; and finally, you can create your integration workflows right in the browser, no client attached! Of course it has many other benefits, but for me these three are so compelling that I decided to give it a try and start developing some POCs around it.

What can you do with them? Basically, you can connect any system with any other system, right from your browser!

Do you want to tweet something when a document is uploaded to your Dropbox? You can do it!

You want to define a complex workflow that starts from a message on the Service Bus, calls a BAPI on SAP and ends inside an Oracle stored procedure? It's right there!

There is a huge list of connectors you can use, and each of them can be combined with the others in many interesting ways!

Connectors have actions and triggers! Triggers, as the word says, are useful to react to an event (a new tweet with a specific word, a new lead on Salesforce, etc.), and they can be used in a push or pull fashion (“I'm interested in this event, so the connector will notify me when it happens” or “I'm interested in this data, so I will periodically call the connector to check whether there is new data”).

Actions are simply methods that can be executed on the connector (send an email, run a query, write a post on Facebook, etc.).

An Azure Logic App is a workflow where you combine all these connectors, using triggers and actions, to achieve your goals.

How do they communicate with each other? I mean, how do I tell connector B, which is linked to A, to perform its action using A's data? It is super simple! When you link two connectors, inside the target one you will see, on every action that requires data, pick lists where you can easily pick the source data. This works because each connector automatically describes its API schema using Swagger (this really rocks!).

And do you want to know the best part? If you write your own connector with Visual Studio, it will automatically generate the Swagger metadata for you! So in literally 10 minutes you can have your brand-new connector ready to use!

Added bonus: you automatically get a test UI for your API, generated by Swagger!
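To give you an idea of how little code a connector action needs, here is a minimal sketch of a custom Azure API app controller, assuming the standard ASP.NET Web API template with the Swashbuckle package included (the controller and its logic are hypothetical):

using System.Web.Http;

// Swashbuckle inspects this controller and publishes its Swagger schema;
// that schema is what lets the Logic Apps designer show pick lists for the inputs.
public class EchoController : ApiController
{
    // GET api/echo?message=hello
    [HttpGet]
    public string Get(string message)
    {
        return "Echo from my connector: " + message;
    }
}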

The Azure website is full of references to quickly ramp up on this technology, so instead of a full tutorial I want to give you some useful tips for your Logic App journey.

Tip 1: You will see that published connectors ask you for some configuration values (package settings), and only after that does the connector become usable in your logic app. I tried with no success to do the same in Visual Studio with a custom API app; the best I was able to find is that you can simulate this only if you deploy with a script from GitHub (look at the azuredeploy.json file, some examples here). With the deploy setup screen you can then set some configuration values that will never change once your Azure API app is published. The way this is done is to map deploy parameters to app settings, like this:

"properties": {
"name": "[parameters('siteName')]",
"gatewaySiteName": "[parameters('gatewayName')]",
"serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('svcPlanName'))]",
 "siteConfig": {
  "appSettings": [
   {
   "name": "clientId",
   "value": "[parameters('MS_client_id')]"
   },
   {
   "name": "clientSecret",
   "value": "[parameters('MS_client_secret')]"
   }
  ]
 }
}

Then you can use the usual ConfigurationManager.AppSettings to read these values in your code.
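For example, with the two settings mapped above:

using System.Configuration;

// Reads the values that were mapped from the deploy parameters at deployment time
string clientId = ConfigurationManager.AppSettings["clientId"];
string clientSecret = ConfigurationManager.AppSettings["clientSecret"];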

I guess this will be fixed (with the possibility of defining package settings directly) when publishing to the marketplace becomes available.

Tip 2: If you store credentials inside your custom API app, please note that by default API apps are published as public... so if this particular API app reads from your health IoT device, anyone in the world who knows or discovers the API address can call it and read your health data! So set the access level to internal, not public!

Tip 3: The browser designer can sometimes be unstable and produce not exactly what you were hoping for; always check the code view too!

Tip 4: Azure API apps have application settings editable in the Azure portal, just like normal Azure “Web” apps, but they are hidden! Look at this blog post that saved me!

That’s it!