Creating a Conversational Bot with ChatGPT, MuleSoft, and Slack

Can we create a fully functional conversational bot that leverages the power of a Large Language Model (LLM)? The answer is a resounding yes!

In this post, we’ll guide you through the process of building a robust, interactive conversational bot from scratch. If you have a fresh OpenAI account, you can do it entirely with free accounts and software, since OpenAI gives us $15 of credit to try the API. If not, you’ll need to add credit to your OpenAI account, but this sample app is inexpensive to run.

We’ll use MuleSoft, Slack, and the state-of-the-art ChatGPT to make it happen. Unlike traditional NLP systems, ChatGPT is an LLM designed to understand and generate human-like text. This makes it extremely useful for various language-processing tasks.

So, buckle up and join us as we reveal the secrets to creating an intelligent bot that leverages the advanced capabilities of ChatGPT, an LLM that can enhance team collaboration and productivity, and deliver a seamless user experience. Let’s dive in!

Note: The accounts and software used in this post could have some limitations since MuleSoft gives us trial accounts.

The main purpose is for you to understand and learn the basics of:

  • How to implement the OpenAI REST API (we’ll be using the gpt-3.5-turbo model)
  • How to create a simple backend integration with Anypoint Studio
  • How to build an integration with Slack

Prerequisites

  • Anypoint Studio’s latest version.
    • Once you have installed Anypoint Studio and created a new Mule project, you need to install the Slack Connector: open the Anypoint Exchange tab, then search for and install the connector.
  • An Anypoint Platform trial account (you can create a 30-day trial account).
  • A Slack bot installed on a channel.
  • An OpenAI account with available credit. Remember, OpenAI gives you $15 of credit if it’s your first account. If you previously registered on the OpenAI platform, you will need to add a balance to your account; however, following this guide and building the sample application is really cheap.

Once we have everything installed and configured, we can proceed to get the authorization tokens we will need throughout the integration. Save these in your mule-properties.yaml file.

OpenAI API Key

Once you have created your account on OpenAI, you will be able to access your account dashboard, where you will see a tab labeled “API Keys”. Here, you can generate your secret key to make requests to the OpenAI API. Simply click on “Create new secret key”, copy the key, and save it to a text file.

Slack OAuth

In your Slack application, you should already have your bot configured in a channel on Slack. If you don’t know how to do that, you can follow this guide. In the bot’s scope configuration, enable ‘channels:read’, ‘chat:write:bot’, and ‘channels:history’.

This screenshot is an example of what the interface looks like; you will have your own Client ID and Client Secret:

Configuration properties

You can use this sample for your mule-properties.yaml file; just replace the keys and IDs with your own.
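A minimal sketch of what that file could look like. Apart from slack.conversationId (referenced by the channel listener later in this post), the key names here are illustrative, so match them to whatever placeholders your flows use:

```yaml
# mule-properties.yaml -- sketch only; never commit real secrets to source control
openai:
  apiKey: "sk-your-openai-secret-key"   # from the OpenAI "API Keys" tab
  model: "gpt-3.5-turbo"
slack:
  token: "xoxb-your-bot-oauth-token"    # bot token from your Slack app
  conversationId: "C0XXXXXXXXX"         # ID of the channel where the bot is installed
```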

The Integration

Now that we have our bot created in Slack and our API key from the OpenAI dashboard, you can start to see each system’s role and the missing piece that connects them all: MuleSoft’s Anypoint Platform.

The Project Structure

The project is divided into a main flow and three sub-flows, split by functionality. We need to do a few things between receiving a message from a Slack user and replying to it. Please see the image below and each block’s explanation.

Main Flow

  1. This Mule flow listens for new messages in a Slack channel using the slack:on-new-message-trigger component. The channel is specified using the ${slack.conversationId} property, and a scheduling strategy runs the flow every 5 seconds using the fixed-frequency component (a simplified XML sketch follows this list).
  2. Next, the flow checks whether the received message is from a user rather than from the bot itself. If the message is from the bot, the flow simply logs that fact.
  3. The incoming message is then transformed using a DataWeave expression in the Transform Message component. The transformed message is stored in the incomingMessage variable, which contains the user, timestamp, and message text.
    • If the message is from a user, incomingMessage.message is checked to see if it equals “new”. If it does, the finish-existing-session-flow is invoked using the flow-ref component. If not, the check-session-flow is invoked with its target set to incomingMessage.
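To make that structure concrete, here is a rough sketch of what such a main flow could look like in Mule 4 XML. Treat it as an illustration only: namespace declarations are omitted, the Slack trigger’s attribute names may differ by connector version, and the slack.botUserId property and DataWeave field names are assumptions.

```xml
<flow name="main-flow">
  <!-- Poll the Slack channel for new messages every 5 seconds -->
  <slack:on-new-message-trigger config-ref="Slack_Config"
                                conversationId="${slack.conversationId}">
    <scheduling-strategy>
      <fixed-frequency frequency="5000"/>
    </scheduling-strategy>
  </slack:on-new-message-trigger>

  <!-- Keep the fields we care about in the incomingMessage variable -->
  <ee:transform>
    <ee:variables>
      <ee:set-variable variableName="incomingMessage"><![CDATA[%dw 2.0
output application/java
---
{
  user: payload.user,
  timestamp: payload.ts,
  message: payload.text
}]]></ee:set-variable>
    </ee:variables>
  </ee:transform>

  <choice>
    <!-- Ignore messages posted by the bot itself (slack.botUserId is an assumed property) -->
    <when expression="#[vars.incomingMessage.user == p('slack.botUserId')]">
      <logger level="INFO" message="Message was sent by the bot itself, ignoring."/>
    </when>
    <when expression="#[vars.incomingMessage.message == 'new']">
      <flow-ref name="finish-existing-session-flow"/>
    </when>
    <otherwise>
      <flow-ref name="check-session-flow" target="incomingMessage"/>
    </otherwise>
  </choice>
</flow>
```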

Overall, this flow handles incoming messages in a Slack channel and uses choice components to determine how to process the message based on its content and source.

The finish-existing-session-flow and check-session-flow are the flows in the application that handle the logic for finishing an existing session or checking whether a new session needs to be started.

Finish existing session flow

  • “Finish-existing-session-flow”: terminates the previous session created by the user.

Check session flow

This flow, “check-session-flow”, checks whether a user has an existing session and creates one if not. It follows these steps:

  1. Check if the user has an existing session: look up the user’s ID in an object store called “tokenStore” (the lookup is sketched after this list).
  2. Check the user’s message array: look in the “store_messages_user” object store to see if there are any messages stored for the user.
  3. Choice on payload: use a choice component to check whether the payload returned from step 1 is true.
    • When the payload is true: retrieve the existing session ID from the “tokenStore” object store and set it as a variable called “sessionId”. Also retrieve any messages stored for the user from the “store_messages_user” object store and set them as a variable called “messageId”. Finally, log the “messageId” variable.
    • Otherwise: set a welcome message for the user and store it in the “store_messages_user” object store. Generate a new session ID and store it in the “tokenStore” object store. Finally, set the “sessionId” variable and generate a welcome message for the user in Slack format.
  4. At the end of the flow we interact with the OpenAI API by calling a flow named “make-openai-request-flow”.
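A sketch of that lookup using the Mule 4 ObjectStore connector. The store names come from the steps above; the key expression, the uuid() session ID, and the welcomeMessage variable are illustrative assumptions:

```xml
<!-- Step 1: does this Slack user already have a session? -->
<os:contains key="#[vars.incomingMessage.user]" objectStore="tokenStore"/>

<choice>
  <when expression="#[payload]">
    <!-- Existing session: load the session ID and the stored message history -->
    <os:retrieve key="#[vars.incomingMessage.user]" objectStore="tokenStore"
                 target="sessionId"/>
    <os:retrieve key="#[vars.incomingMessage.user]" objectStore="store_messages_user"
                 target="messageId"/>
    <logger level="INFO" message="#[vars.messageId]"/>
  </when>
  <otherwise>
    <!-- New user: seed the history with a welcome message and create a session ID -->
    <os:store key="#[vars.incomingMessage.user]" objectStore="store_messages_user">
      <os:value>#[vars.welcomeMessage]</os:value>
    </os:store>
    <os:store key="#[vars.incomingMessage.user]" objectStore="tokenStore">
      <os:value>#[uuid()]</os:value>
    </os:store>
  </otherwise>
</choice>

<flow-ref name="make-openai-request-flow"/>
```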

The steps in this flow ensure that a user’s session is properly handled and that messages are stored and retrieved correctly.

Make OpenAI request flow

The purpose of this flow is to take a user’s message from Slack, send it to OpenAI’s API for processing, and then return the response to the user via Slack. The flow can be broken down into the following steps:

  1. Transform the user’s message into a format that can be sent to OpenAI’s API. This transformation is done with DataWeave in the “Transform Message” component. The transformed payload includes the user’s message, plus additional data such as the OpenAI model to use and a default message to send if there is an error (the request body is sketched after this list).
  2. Log the transformed payload using the “Logger” component (optional; it was used to check that the payload was built correctly).
  3. Send an HTTP request to OpenAI’s API using the “Request to ChatGPT” component. This component sends the OpenAI API key as an HTTP header.
  4. Store the user’s message and OpenAI’s response in an object store using the “Store message user” component. This allows the application to retrieve the conversation history later (please read more about this in the OpenAI documentation; it helps keep the context of the conversation a user has with ChatGPT, since messages are stored with the roles “user” and “assistant”).
  5. Transform the OpenAI response into a format that can be sent to Slack using the “Make JSON to send through Slack” component. This component creates a JSON payload that includes the user’s original message, the OpenAI response, and formatting information for Slack.
  6. Send the Slack payload as an ephemeral message to the user using the “send answer from chatGPT to Slack” component.
  7. As the final step, delete the original message sent by the user. Because the bot is deployed on a channel, messages are public; using ephemeral messages improves the privacy of the conversation on the Slack channel.
    1. Create a payload to delete the original message from Slack using the “payload to delete sent messages” component.
    2. Send a request to delete the original message from Slack using the “delete sent message” component.
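For reference, the “Transform Message” step needs to produce a body shaped like the one below for OpenAI’s chat completions endpoint (POST https://api.openai.com/v1/chat/completions, with the key sent as an Authorization: Bearer header). Replaying the stored history in the messages array is what preserves the conversation context; the message contents here are illustrative:

```json
{
  "model": "gpt-3.5-turbo",
  "messages": [
    { "role": "user", "content": "What is MuleSoft?" },
    { "role": "assistant", "content": "MuleSoft is an integration platform..." },
    { "role": "user", "content": "Can it post messages to Slack?" }
  ]
}
```

The text to send back to Slack comes from choices[0].message.content in the API response.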

By following these steps, the MuleSoft application can take a user’s message from Slack, send it to OpenAI’s API, and return the response to the user via Slack, while also storing the conversation history for later use.

This was created and tested with these versions:
Mule Runtime v4.4.0
Anypoint Studio v7.14
Slack Connector v1.0.16

Oktana is a SOC 2 Certified Salesforce Partner

As members of the Salesforce ecosystem, we are all aware that Trust is the #1 core value of Salesforce. Customers trust that data stored in Salesforce is secure. This expectation of trust naturally extends to any partner accessing a company’s Salesforce instance and connected systems.


Oktana is proud to have maintained SOC 2 Type II certification since 2021, which provides assurance that we meet the highest data security standards. Since 87% of our business over the past three years has been within the High Tech industry, including Healthtech and Fintech, this certification also enables our customers to maintain their own compliance certifications, as we meet their vendor security requirements.

What is SOC 2 certification?

SOC 2 is a voluntary compliance standard for service organizations, developed by the American Institute of CPAs (AICPA), which specifies how organizations should manage customer data. The standard is based on what they call “Trust Services Criteria”, covering these core concepts:

  • Security – Your data is managed and stored securely
  • Availability – Your data is always available
  • Processing Integrity – Your data remains intact at all times
  • Confidentiality – Your data is treated confidentially
  • Privacy – Your data is not exposed when not necessary

To maintain our SOC 2 certification, we are audited against a set of security controls supporting these Trust Services Criteria.

Why should you care?

To Oktana, this is the bare minimum a Salesforce partner can provide given the sensitivity and importance of the data you store in Salesforce. A SOC 2 certified Salesforce partner confirms they will respect your data and help you extend to your customers the same level of trust Salesforce provides to you.

Here are some of the benefits of working with a SOC 2 certified Salesforce partner:

  • Peace of mind and confidence in data security

By choosing Oktana as your Salesforce partner, you can rest assured we are taking active steps to protect your data. SOC 2 certification is an additional guarantee that we are committed to our customers’ data security and that we have implemented appropriate security controls to protect it, including training our team members.

  • Regulatory compliance 

To meet your own regulatory requirements, you may need to require vendors to be SOC 2 certified. By working with Oktana on your Salesforce implementation, you can be sure we meet the necessary bar to enable you to comply with your regulatory requirements.

  • Risk reduction 

By working with a SOC 2 certified Salesforce partner, you can be sure we have taken proactive measures to protect your data and reduce the risk of data security breaches and associated costs. In line with this, we work with you to ensure your proprietary data does not enter Oktana’s systems. We will use your project management software and repositories and, if you prefer, your VPN and hardware.

  • Competitive advantage 

By choosing to work with a SOC 2 certified provider, you can differentiate your company from the competition, improve your reputation, and increase the trust of your own customers.

Our compliance program is robust, which has enabled us to work with regulated industries, including the public sector at both the state and federal levels. In addition to being SOC 2 certified, we can provide onshore resources to meet other compliance requirements. To learn more, check out our Trust page.

Mastering Acceptance Criteria: 3 tips to write like a Professional

What is Acceptance Criteria?

Acceptance Criteria is a term used in software projects for a deliverable that lists a set of pre-defined requirements for a product, service, or feature. These criteria must be met or approved for the deliverable to be “accepted” by the end user and become a functional part of the organization’s solution or software. The criteria are specific to each user story in Agile project management and are elaborated during planning so that, once defined, the development team can use them as a guide for implementation and testing. It is highly recommended to have detailed, measurable criteria that are clear to everyone involved, so that a measurable outcome is obtainable.

Why is Acceptance Criteria necessary?

Writing requirements in the form of “acceptance criteria” has become the norm in Agile projects. Crafting requirements as digestible deliverables goes a long way toward a successful implementation, and using acceptance criteria is standard practice for requirement documentation that can align different teams around a common understanding of the ask.

It is extremely important that cross-functional teams hold a shared understanding, since each collaborator brings a unique background, ideas, and interpretations, and those differences can lead to misalignment. Moreover, acceptance criteria can vary greatly with the author’s writing style, which is particularly evident on large projects where multiple individuals produce them.

Needless to say, we all have our own writing preferences, but it’s important to remember that writing acceptance criteria is a skill that can always be refined and improved, with the ultimate goal of producing a document that reduces implementation ambiguity, is clear to all parties involved, and provides value to the project.

Acceptance Criteria & User Stories

The skill of writing user stories is well defined: understand the project scope, work on your personas, follow the INVEST mnemonic, and you’re pretty much set. Acceptance criteria, on the other hand, are much broader and more “open” in terms of definition. There is often a gap between theory and practice: while working on requirement analysis, the real world often presents time constraints, a poorly defined scope, and a lack of stakeholder engagement. A user story can reflect a specific goal, but the acceptance criteria need to showcase the behavior in enough detail for the user story to be achieved.

As a Business Analyst in software projects, I am involved during all phases of design and implementation. However, countless times I have seen the expectation of having reached a shared understanding become dismantled at all stages of the project. 

There are many resources out there that cover best practices, but I want to emphasize the importance of actively listening to questions or feedback when reviewing acceptance criteria with the scrum team. This is a critical aspect to know whether it is well written and achieves the goal of the user story. Nailing down a good set of acceptance criteria is a challenge and finding that sweet spot can make your requirements a masterpiece for the team. 

The Goldilocks Principle

What is the Goldilocks principle?

The Goldilocks story can help us think about finding the middle ground for writing effective acceptance criteria, which depends on each project’s particular goals and needs. Aside from the blatant issue of home invasion, Goldilocks does teach an important lesson: nothing at the extremes was “right”, not eating the porridge, sitting in the chairs, or sleeping in the beds. Yes, the intended lesson may be to always seek the ideal balance in life, but let’s also apply it to writing acceptance criteria. Too vague, and it becomes a solution nightmare. Too detailed, and it removes the wiggle room needed for the “issues” or design/tech constraints that occasionally pop up. Too lengthy, and it becomes hard for QA to test effectively; too short, and it might not reflect the implementation needs.

However, let’s go a step further. We don’t always have time to write well, and sometimes we don’t know the clear scope or vision of the stakeholders but have to start anyway, we might not have a strong technical lead, or we might not have access to the right stakeholders to get the information we need. These constraints can lead to unclear and difficult-to-read acceptance criteria.

In the past, I have assessed the project risks applicable to writing acceptance criteria, similar to the ones mentioned above, and devised a strategy on how I can best write them so that they become a valuable key piece of work used by the team.

Tips and recommendations to write good acceptance criteria​

  • Assess your timeline to deliver the acceptance criteria.

An aggressive timeline requires fast output and more generic criteria. Details will need to be fleshed out further when reviewing the user stories (during scrum team refinement) and possibly during implementation. Not ideal, but it is a real-world scenario.

A lengthy timeline gives more time to work alongside stakeholders and fully understand the requirement and its context. We should work on supporting documentation like process flows or designs to assist teams to understand the written criteria.

  • Understand the project’s complexity.

A straightforward project involving simple development work and design gives us the opportunity to write in detail (always respect best practices, like a maximum of 8-10 criteria per user story) and call out specifics such as errors, exceptions, or alternate behavior.

A highly complex implementation often involves one or more integrations, where it can actually be more beneficial to write more generically, covering only the key details, since unforeseen limitations always arise during development. Work with what you know as a basis, and any underlying constraints will naturally come to the surface.

  • The audience: get to know your stakeholders, their engagement, and how invested they are.

If stakeholders do not display much product knowledge or are not very helpful in defining the requirement, they might need you to aid their decision-making. This is an extremely common issue in projects. If this is the case, writing acceptance criteria needs to be clear to them so as not to include too much technical jargon. This can be elaborated with the development team during refinement.

However, if stakeholders are overly involved and have a technical background, this might help you get what you need, but they should not dictate “how” a criterion is to be met. Here, we need to stick to writing acceptance criteria as statements rather than descriptions of how something needs to be achieved, however obvious that may be.

Conclusion

All in all, we can add a lot of value to a project when writing acceptance criteria while taking into consideration all the particulars and risks of the project. This analysis is worth doing before investing time, effort, and potentially rework: examine how you’re going to tackle writing the acceptance criteria. Although these tips are generalizations, they can help you get those acceptance criteria just right.


Check out our staff augmentation services

Tactical Performance Augmentation

The Great Problem in Software Development

The term “Staff Augmentation” was first coined by Gartner, Inc. in 1988. When I started in the IT staff augmentation services industry a few years ago at FullStack Labs, I realized a lot had changed since then. Having recently joined Oktana, I see this compounded by the need for talent with skills across both fullstack and Salesforce technologies.

Despite many options available for people to learn to code, including online training and internships, demand has outpaced supply for several decades. There are endless commentaries on the subject, but a 2022 Korn Ferry study cited, “In tech alone, the US could lose out on $162 billion worth of revenues annually unless it finds more high-tech workers.” That’s alarming.

Not only are organizations missing out on revenue but also opportunities to innovate. According to Gartner, “IT executives see the talent shortage as the most significant adoption barrier to 64% of emerging technologies, compared with just 4% in 2020, according to a new survey from Gartner, Inc. A lack of talent availability was cited far more often than other barriers this year, such as implementation cost (29%) or security risk (7%).”

Gartner also shared that the pace of employee turnover is forecast to be 50–75% higher than companies have experienced previously, and the issue is compounded by it taking 18% longer to fill roles today than pre-pandemic.

Obviously, the need for IT staff augmentation has only increased, but traditional solutions may not be good enough for modern challenges.

Traditional Solutions

Due to the ever-present demand that has outstripped supply, organizations across all industries have attempted to fill this gap with offshore, nearshore, and onshore solutions. In the IT sector, these solutions typically take the form of offshore, nearshore, or onshore staffing models.

Modern Challenges

These traditional models do provide a modicum of success, but for a number of reasons there are typically additional challenges, both tactical and performance-related, with these models and many of the vendors that employ them.


Time Zones

There are bountiful offshore resources throughout Asia and Eastern Europe which, if you’re based in the U.S., unfortunately do not provide natural overlap with your team’s working hours. This is a massive issue for productivity, as it creates a collaboration barrier that does not support real-time context, problem solving, or efficient planning. Even when resources commit to U.S. working hours, this inevitably leads to a day-to-day imbalance in energy and motivation across your team.


Language & Culture

Both language and culture have the potential to create communication issues. Many offshore resources either don’t speak English or don’t speak it well enough to communicate efficiently with U.S. clients, which results in lost time, as the focus becomes closing basic communication gaps rather than the work itself. Cultural norms also differ across the globe and can impact a team’s sense of urgency and willingness to share the true status of your project.


Trust

With vendors across the globe, it’s increasingly difficult to find a vendor with the same level of compliance and governance standards as your organization.


Length of Service Contracts

Many consultancies require a minimum contract term of 12-18 months or longer. It is hard to anticipate your future team size and skill needs up to 18 months in advance, so clients often over-deploy contract resources. This usually results in excess capital expense and wasted time finding work to “keep the team busy” toward the end of the engagement.


Low Initial Visibility

Many vendors require clients to evaluate each candidate based on lengthy CVs, and sometimes to conduct their own technical interviews. This takes extra time to evaluate contracted resources the vendor has already evaluated, wasting your engineering leadership’s productivity. The main issue is a lack of consumable content to confirm the team member has the skills to deliver the project and a personality that will mesh well with your team’s culture. The result is a very slow onboarding process, akin to internal hiring, when it should be simple and much faster.


Attrition

High turnover is a big problem when resources feel “left on an island”, increasingly isolated from their consultancy over time, which impacts performance. Not part of the client’s org, and feeling separated from their consultancy, they often leave for a new organization, as they are typically unable, contractually, to be hired directly by you.


Lack of Career Development

A frequent cause of attrition is that vendors lack the infrastructure or incentive to invest in the career development of resources already placed on projects. This puts the burden of training and career coaching on you, or leaves individual team members to guide their own path, which impacts overall performance.

What is Tactical Performance Augmentation?

When we realized the common problems above, and saw the challenges the market faces, we thought “What if we could leverage our current way of working to solve all of those problems with a new service product that is leaps and bounds ahead of anything offered by another firm?” Thus, Tactical Performance Augmentation (TPA) was born.

This new model addresses many of the market frustrations currently faced in IT staff augmentation such as slow provisioning of resources, lack of transparency in billing and inflexible length-of-service contracts.

Tactical

Tactical Performance Augmentation allows you to build very specific and focused teams with exactly the right number of people with the right skills. This is the “Tactical” part of TPA and provides maximum flexibility to deliver exactly what you need, when you need it.

Using the TPA model, you structure your team with the combination of developers, quality engineers, scrum masters, and UI/UX designers you need at a given time. As your needs change, you can reduce, adjust, or increase those resources.

The best part of the tactical component is that this is done without the pressure or limitations of a length-of-service contract, but instead on a month-to-month basis. You only pay for the team for the exact amount of time you need it.

Performance

When we evaluated the market, one of the biggest concerns across IT organizations was the lack of performance-based offerings from vendors. There was no way to measure the ROI of a vendor’s resources on your project, or to compare one vendor’s efficacy against another’s in your portfolio.

We realized the ability to provide a true Time & Materials cost model is a necessary component in providing performance-based IT staff augmentation. It’s on the vendor to keep performing to earn your business.

That’s the “Performance” component of Tactical Performance Augmentation. Either a vendor delivers on your project, and makes it easy to work with them, or they don’t.

Solve Your IT Staff Augmentation Challenges

Oktana has provided tactical team augmentation services to customers for many years, with very high satisfaction scores and positive feedback due to a performance-based model. 

Historically, this augmentation model was utilized after large-scale project delivery engagements. We have now opened access to Tactical Performance Augmentation, our highly transformative approach to staff augmentation, to new customers.

Tactical

The most important aspect of the tactical component of TPA is flexibility. Oktana has always been flexible, working with each customer to help find the perfect team composition based on the work they need to get done. 

We can also address many of the other more tactical challenges companies face when hiring vendors to provide IT staff augmentation services.

Time Zones

We provide nearshore and onshore resources. With locations throughout the Americas, we provide resources with a generous overlap across shared time zones.

Language & Culture 

Our team speaks English, whether native or in addition to Spanish. Though each country has its own unique culture, many of the cultural norms throughout the Americas are shared, or complementary, which makes working as one team a smooth, and fun, experience.

Trust

With onshore resources in addition to nearshore resources, plus SOC 2 certification, Oktana is a natural fit for organizations with high compliance and governance standards. Learn more about our compliance program.


Transparent Team Visibility

We provide our customers with Team Transparency Profiles to introduce the team we have selected for your project based on their skills, experience, and interest. You will get to know your team and meet them to ensure they are the right fit.


Month-to-Month Contracts

We know your needs change. Again, this goes back to flexibility. Set the end date in advance, or simply give us 45 days’ notice when you want to roll one resource, or the whole team, off.


Performance

We earn your business. We have always provided our customers with a true Time & Materials cost model, meaning, if a resource works 7 hours and 45 minutes on a given day, then that is what we bill you.

This enables you to see how many hours of work got done at the end of a sprint, epic or month, versus how much we invoiced you for that resource, to more accurately calculate velocity. We call it the “Performance-to-Spend ROI”. If you’re not happy with that ROI metric, simply give notice that you would like to transition a specific team member off your project.

It’s on us as a vendor to keep performing to earn your business. We give you the model, tools and process to measure our efficacy against other vendors in your portfolio – or against your own team – to make sure that it makes sense to continue to pay for our service.

Just as we addressed many of the additional tactical challenges, we want to also address some of the additional performance-based issues we know companies face when hiring vendors to provide IT staff augmentation services.


Retention

To keep our team members from feeling isolated on an island, we build your team in such a way that they have the support they need. We encourage our customers to treat them as they would their own employees, to build a connection to their business, just as we make efforts to ensure everyone knows they are what makes Oktana special. We are committed to our team, and they are treated with a respect and care that carries over to your organization.


Career Growth 

We invest in our team’s continuous growth. Our team members complete extensive onboarding training to meet our standards, which customers should trust, and this continues as their career grows with us. Our training team works with each team member to guide a path that enables them to excel on your project and mature in their field while our resourcing team ensures team members move to projects that support their career growth.


Why you should Integrate Shopify with Marketing Cloud

Choosing the right platform on which to build your online business is a very important decision. Powerful technology infrastructure must be integrated into your site to support the customer buying journey and drive your customers toward a conversion. Integrating Shopify with Salesforce Marketing Cloud will help your business increase customer engagement, provide a personalized experience, and deliver many other benefits that will help you generate conversions and optimize your customers’ journey.

How does it work?

To integrate Shopify and Marketing Cloud, you will need MuleSoft to exchange customer, product, and order data between the two platforms. A tracking code installed in your Shopify store syncs product views and other actions taken in your store with Marketing Cloud. This allows you to generate reports, send customized emails throughout the entire shopping cycle, and have a 360-degree view of your customers, so your team can create a more personalized experience, resulting in more engagement, more sales, and less investment.


Benefits of the Integration

Personalized Shopping Suggestions Powered through Einstein AI

Every time your customer creates a Shopify account, they’re automatically added to your database so you can decide (depending on their behavior) the best email campaign or flows to include them in.

For example: Bob creates an account in your company’s online store and then makes a purchase. He then receives a personalized order confirmation email with the details of his purchase and additional product recommendations based on his shopping interests.

These product recommendations are generated by Einstein AI using customer data such as previous purchases, browsing history, and other significant shopping patterns.

Abandoned Cart Reminders Personalized with Shopify Data

With this integration, your customers can receive a customized email whenever they add products to their shopping cart but don’t complete the purchase. This feature allows you to configure personalized flows, so you won’t have to worry about losing a sale. Simply set the amount of time after which you wish to remind your customers about their pending shopping carts, and the email will be sent automatically from Marketing Cloud.

Create Custom Marketing Campaign Flows

With Marketing Cloud, you can easily create and customize marketing campaign flows to reach your customers throughout the different stages of the buying cycle. As mentioned above, you can create flows for abandoned shopping carts and purchase emails. Flows can also be configured for newsletters, sales promotions, and whatever email marketing approach fits your strategy.

How does a Guided Send work in Marketing Cloud?

  1. Create the email template and define your audience.
  2. Select the data extension option so you’re able to send the email to all of your target audience.
  3. Finally, select the email address from which the campaign will be sent.

Just follow these three simple steps, and you too can send email campaigns to your selected Shopify subscribers.

Track Success in Real-Time with Marketing Cloud Dashboard

With the Marketing Cloud dashboard, you get all the information you need to track the success of your campaigns in one centralized dashboard. With this integration, you and your team can access real-time performance results and statistical Shopify data for all of your campaigns on every email sent.

Some of the information you can access includes:

  • Behavior: clicks, forwards, and purchases.
  • Conversions and new subscribers
  • Email Activity: analyze which section performs better based on real-time results.
  • Delivery Data: number of emails sent and bounce rate
  • Engagement Data: open rate and click-to-open rate
  • Email Performance Data: conversion funnel and engagement distribution
  • Insights on best-performing emails, subscriber journeys, and engagement histories

Is this something you or your company need?

We have a team of experts ready to help you. Our technical team has partnered on multi-product, cross-cloud integrations with over 8 years of experience building on the Salesforce platform. We are experienced and confident in implementing third-party integrations. We can connect any application, technology, or system to Salesforce.

If you have any doubts about this integration being the right fit for your organization, don’t hesitate to contact our Sales team at [email protected] for more information.

Dreamforce 2022 Recap: Salesforce Genie, Slack Canvas, and more!

This year marked the 20th anniversary of Dreamforce, and Salesforce spared no expense in making it one of the most highly anticipated tech conferences in the country. This Dreamforce was the first in-person conference Salesforce has held since the start of the pandemic. More than 40k individuals attended the annual conference in San Francisco and over 150k streamed the event online. During the event, Salesforce delivered significant tech innovations and product announcements that will surely mark a new chapter in its history.

Over the last 20 years, Salesforce has continually released innovations to the Salesforce ecosystem, and it will continue to do so.


Salesforce Genie: Real-Time Data Integration across all Clouds

What is Salesforce Genie?

David Schmaier, Salesforce President and Chief Product Officer described the launch of Genie as “The most significant change to the Salesforce platform in the company’s history.”

Salesforce Genie is a new Customer Data Platform (CDP) that offers the ability to provide highly personalized customer experiences hyper-scaled to every single part of your business in real-time.

Organizations use many applications to run their business resulting in customer data being disconnected which can make customer experiences repetitive. Genie combs data sources across applications, ensuring teams will have every piece of customer information available at any point it’s needed. 

Genie is more than just a data integration layer. Natively built within the Salesforce platform, it combines Einstein AI, machine learning, and Salesforce Flow. It opens up all kinds of automation possibilities by allowing data to flow faster and more freely.


Salesforce Genie’s Main Characteristics

  1. Connect all your data and historical data sources in real time.
  2. Salesforce Genie democratizes access to real-time data across every cloud. It enables all your teams — across sales, service, marketing, commerce, and more — to adapt their experiences in real-time to whatever is going on in your customer’s world.
  3. With harmonization built in, Salesforce can resolve all this data into a single view of the people behind it and create a real-time Customer Graph of all your customers and interactions.
  4. Salesforce Genie stores all this data using a lakehouse architecture. This makes it easier to categorize and classify the unstructured data that businesses rely on; as a result, Salesforce Genie accesses all this data faster and puts it to work for you.

To understand this new platform, take a look at the Salesforce Genie Trailhead.


Slack Innovations

Transform Productivity with Slack Canvas

Slack canvas is a new surface in the digital HQ that will transform how teams organize and share critical resources. It saves time by helping workers find essential information quickly within Slack. This new tool will be available next year.

Ali Rayl, Senior Vice-President of Product Management at Slack, commented: “Canvases enhance the real-time collaboration you have in Slack channels by offering a set place to organize and share information of any kind in ways that make people more productive and efficient.”


Some Slack canvas features:

  • Sharing essential files, such as account plans, executive briefing docs, and team contacts
  • Creating a curated list of relevant channels the team needs to reference to support their account 
  • Implementing common workflows, like reporting a customer issue to an engineer or approving requests
  • Accessing opportunity data from Salesforce Sales Cloud, plus usage and spend data
  • Creating a snapshot of all the data teammates need, code-free
  • And much more!

New Slack Huddles Feature

Huddles enhance virtual conversations and optimize how teams collaborate and make decisions without leaving Slack.

Slack Huddles New Capabilities:

  • Lightweight video
  • Multiperson screen sharing, drawing, and cursors
  • Emoji, reactions, and stickers
  • Information shared during a huddle gets automatically saved in the channel or DM 
  • And much more!

Purchase Carbon Credits through New Net Zero Marketplace (Coming Soon!)

Now in the Net Zero Cloud you can find Net Zero Marketplace, a climate action hub and a trusted site for organizations to purchase carbon credits. You can sign up to be the first to know when it goes live. Initially, only organizations based in the US will be able to purchase carbon credits, with additional regions to follow soon.


These were some of the latest Salesforce innovations; take a look at the Dreamforce 2022 Main Keynote to learn more.

Dreamforce & The Oktana Team

As we mentioned in our Dreamforce guide, the Oktana management team was present at Dreamforce. We chatted with multiple companies about Salesforce technologies, languages, frameworks, and ways we could help them with digital transformation. 

If you couldn’t schedule a meeting with our team, you can email us at [email protected] to get in touch.

The announcements made at Dreamforce 2022 will impact organizations positively. As a Salesforce Summit Consulting Partner specializing in Service Cloud, Experience Cloud, and development across the Customer 360 Platform, we can use these innovations to help businesses accelerate their growth.

Integrate all your teams with Salesforce Customer 360 Platform

What is the Customer 360 Platform? 

Salesforce Customer 360 is the core Salesforce Platform that connects your marketing, sales, commerce, service, and IT teams with a single view of your customer data, helping you grow relationships with your customers and your employees. It includes a full range of Salesforce products that help your company keep everything connected in one system.


Discover what you can do with the Customer 360 Platform

Salesforce Customer 360 Platform is built to work with countless applications to provide a seamless customer experience that allows you to run your organization smoothly. 

One way to integrate the Customer 360 Platform with other programs is through Salesforce AppExchange, a marketplace for apps that can be installed easily into your Salesforce org. Integration can also be done with the MuleSoft Anypoint Platform, which lets you connect any system, application, data source, and device to unleash the full power of Customer 360.

Customer 360 has a solution for every phase of your customer’s journey. The more teams you unite, the more you know, and the better you grow.

So, what are the teams you can unite under this scalable CRM platform? Sales Cloud, Service Cloud, Marketing Cloud, Experience Cloud, Slack, and Commerce Cloud just to name a few. 

Customer 360 gives everyone in your company the ability to access crucial data and make smarter, faster decisions.


We have helped our customers take full advantage of Customer 360 Platform, integrating it with over 30 different technologies. Here are some of the success stories:


External API developer community

Our customer, a large installer of interior finishes for US homebuilders, needed help to grow their external API developer community to promote integrations with builders, suppliers, and vendor partners.

To create a new community to support the growth of the buyer API and prepare the way for future APIs, we used MuleSoft Anypoint API Community Manager. Our team worked closely with the customer to create a developer portal that showcases the API with technical and business documentation, plus a mocking service that lets developers try the API on the site.

Through Experience Cloud, we were able to customize the community theme to ensure it remained consistent with the overall brand. Using Salesforce CMS cards throughout, we simplified management of the site’s content and provided a space to showcase APIs in development (i.e., “Coming soon”).

Results

  • Launched the developer portal and gave access to builders, which reduced their onboarding time by 80%

Streamlined Data Management with Salesforce

Our customer, a professional services company specializing in Salesforce digital transformations, identified an opportunity to create an abstraction layer on Salesforce to simplify and streamline data entry and management for their clients.

Our client partnered with us to build a new application using Visualforce, React, ts-force, and TypeScript to give their clients a better interface to manage events, opportunities, leads, and comments. This new Salesforce application provides a simplified user interface, unifying management for all 4 object types on one page (translated in both English and Japanese). 

Results

  • Key partners and customers will start using the product as soon as its MVP launches.
  • Companies that use Salesforce to create meeting minutes and action items and to input their sales activities and customer details saved more than 50% of their time.

If you are interested in reading more about our success stories, we recommend you check out Oktana’s Success Stories. 

Oktana achieved ‘Expert’ status for the Customer 360 Platform specialization


In 2022 Oktana achieved ‘Expert’ status for the Customer 360 Platform specialization in the Salesforce Partner Navigator program, the highest level for the category. 

As a software development company that helps customers innovate, 100% of our Salesforce projects use at least one of the Customer 360 specializations. Most of these projects also require implementation of other Salesforce clouds in which we have vast expertise, such as Experience Cloud, Service Cloud, and MuleSoft.

The Customer 360 Salesforce specialization requires demonstrated and validated expertise in eight Salesforce Customer 360 areas. Through our customer projects, we demonstrated knowledge and expertise in App Builder, Heroku, Integration Services, JavaScript Designer, Mobile, Platform, Process Automation, and Security & Privacy.

Did you know we are also Salesforce Summit Partners? Check out how we achieved Summit (previously known as Platinum).

Oktana Salesforce Summit Partner

Oktana was recently recognized for reaching Salesforce Summit Partner status (previously known as Platinum) and for achieving “Expert” status recognition for Salesforce Customer 360 Platform and Experience Cloud in the Salesforce Partner Navigator program.


The Salesforce Navigator program allows partners to differentiate themselves and showcase their expertise. Navigator combines three aspects of a partner’s implementation and services experience into a measure of expertise that is Salesforce-validated and verified. (You can see our current recognition in the Oktana AppExchange listing.)

The evaluation criteria rate the partner’s product and industry expertise based on these measurements:

  • Knowledge: Salesforce certifications 
  • Experience: Completed projects
  • Quality: Customer satisfaction score 

Navigator has three possible levels of expertise: 

  • Level I demonstrates knowledge and the capacity to consistently produce customer success.
  • Level II showcases that the partner has project delivery capacity while maintaining high standards of customer success.
  • Expert is the highest level, for proven leaders with deep experience in their domain.

About Oktana

We have vast experience working with the Salesforce Platform and related technologies. Since 2014, we have partnered with more than 250 companies of all sizes and industries, and we have achieved more than 450 Salesforce Certifications (and numerous non-Salesforce certifications!). 

Our team is distributed across Bolivia, Colombia, Ecuador, Paraguay, Peru, Uruguay, and the US, including West Virginia, with developers, testers, designers, project managers, business analysts, and architects. From day one, we provide training and certification opportunities that enable our team to build expertise across the Salesforce ecosystem and a range of languages, frameworks, and platforms.

We have completed more than 700 projects using Salesforce technologies, allowing Salesforce to recognize our expertise through their Navigator program.

Our current Salesforce expertise

Customer 360 Platform – Expert: Customer 360 connects marketing, sales, commerce, service, and IT teams with a single view of your customer data, helping you grow relationships with your customers and your employees.

Experience Cloud – Expert: Salesforce Experience Cloud, formerly known as Community Cloud, helps companies quickly build connected digital experiences for their customers, partners, and employees at scale.

Einstein – Level II: The first comprehensive AI for CRM. An integrated set of AI technologies makes Salesforce Customer 360 smarter and brings AI to companies everywhere.

Service Cloud – Level II: Service Cloud allows your companies to deliver service to every customer, anytime, anywhere. It is a part of Salesforce’s Customer Success Platform, an ecosystem of connected mobile and social tools powered by the cloud.

PDO/AppExchange – Level II: We have in-depth knowledge of building commercial applications for the AppExchange; we can help you design the product or work on isolated key areas of your application. To date, we have built one app and 12 components on the AppExchange.


Industry Products – Level I: Salesforce industry clouds provide out-of-the-box industry-relevant innovation to speed up implementation and customer success, and create a unified experience that deepens relationships across lines of business.

Nonprofit Cloud – Level I: Nonprofit Cloud is an end-to-end platform designed for fundraising organizations, educational institutions, and other nonprofit entities to expand their reach digitally, deepen their connections, and streamline their internal management by keeping track of the people they work with.

Sales Cloud – Level I: Sales Cloud is a fully customizable product that brings all your customer information together in an integrated platform that incorporates marketing, lead generation, sales, customer service, and business analytics.

What does this mean for you?

We can confidently say that we are Salesforce experts recognized for guiding our customers towards success. 

As a Salesforce Summit Partner and certified experts for Customer 360 Platform and Experience Cloud, we provide digital strategies, technical architects, and Salesforce solutions that will help your company innovate and grow. 

We can guide you in finding the right solutions for your business. Check out our latest project updates, where we have built customized solutions for our clients.

More about our Salesforce Integration and Custom Development services.

Salesforce TDD (Test-Driven Development)

Hi, I’m Diego and I have several years (I prefer not to say how many, but let’s say “enough”) working in Salesforce. I am also an Agile enthusiast and lover of applying related techniques throughout my work.

I’ve found test-driven development (TDD) can be a great technique for building robust software, but when we work in Salesforce, I find some constraints or restrictions can make it frustrating. In this post, I’m going to show when and how I use TDD while coding in Salesforce.

Disclaimer: The following is written on the basis of what has worked for me in the past. This is not intended to be a formal or exhaustive description. Use it as you see fit, I do not take any responsibility if you screw it up! 🙂


Let’s start at the beginning:

What is TDD?

TDD is an Agile development technique that requires you to write a failing unit test before writing any “production code.”

How are you supposed to do TDD?

First I’ll describe how TDD is done in general (this is the way to go in languages like Java).

  1. Write a unit test and make it fail (a compilation error is considered a failing test). Write no more lines of code than needed.
  2. Write the least possible production code to make the test pass (fixing a compilation error is considered a passing test).
  3. Refactor the code.
  4. Repeat until you are done.

Let’s check out an example in Java so you see how it works. In this example, we wanna create an advanced calculator of integers.


We work in split view when doing TDD

Round 1

Let’s write a failing unit test:
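A minimal sketch of that first test, assuming JUnit 4 (the class and method names are illustrative):

```java
import org.junit.Test;

public class MyCalculatorTest {

    @Test
    public void test1() {
        // Does not compile yet: MyCalculator does not exist
        MyCalculator calculator = new MyCalculator();
    }
}
```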

Oops, MyCalculator is not defined yet, compilation issue…therefore, it is a failing test.

Let’s make the test pass:
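The least possible production code, sketched:

```java
// An empty class is enough to fix the compilation error
public class MyCalculator {
}
```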

Compilation problem fixed! The test is passing again. Woohoo!

Not much code to refactor yet.

Round 2

Let’s continue with that test to make it fail again.
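For example, by asserting on a method that does not exist yet:

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyCalculatorTest {

    @Test
    public void test1() {
        MyCalculator calculator = new MyCalculator();
        // Does not compile yet: getOpposite is not defined
        assertEquals(0, calculator.getOpposite(0));
    }
}
```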

Mmm…getOpposite is not defined, ergo compilation issue, ergo failing test.

Let’s fix that. Let’s write the minimum code to fix the test:
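A sketch of that minimum code:

```java
public class MyCalculator {

    public int getOpposite(int number) {
        return 0; // hard-coded: 0 is the only case tested so far
    }
}
```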

getOpposite is defined and returns 0 to any parameter (in particular, 0). Test is passing again!!!

Let’s refactor.

We still don’t have much code to refactor, but there are some name changes we could make to help the code read more easily (yup, yup, yup… unit test code is code, too).
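For instance, renaming the test so its name states the expected behavior (the new name is illustrative):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class MyCalculatorTest {

    @Test
    public void getOppositeOfZeroReturnsZero() {
        MyCalculator calculator = new MyCalculator();
        assertEquals(0, calculator.getOpposite(0));
    }
}
```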

Much better now! 😀

Round 3

Let’s add a new minimum test to fail.
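For example, a test added to MyCalculatorTest for a value we know is not handled yet:

```java
@Test
public void getOppositeOfOneReturnsMinusOne() {
    MyCalculator calculator = new MyCalculator();
    assertEquals(-1, calculator.getOpposite(1)); // fails: getOpposite always returns 0
}
```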

Right now, getOpposite returns 0 to any parameter… it’s a fail!

Let’s write the minimum code required to make the test pass.
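A sketch:

```java
public class MyCalculator {

    public int getOpposite(int number) {
        if (number == 1) {
            return -1; // just enough to pass the new test
        }
        return 0;
    }
}
```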

Yay! It’s green again! Let’s continue.

Round 4

Let’s add a new failing test.
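Another test method in MyCalculatorTest, with a value other than 0 or 1 (illustrative):

```java
@Test
public void getOppositeOfFiveReturnsMinusFive() {
    MyCalculator calculator = new MyCalculator();
    assertEquals(-5, calculator.getOpposite(5)); // fails: only 0 and 1 are handled
}
```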

The last test fails (we return 0 for any value other than 1), so now we need to write the minimum code to fix it:
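The minimum (and deliberately ugly) fix, sketched:

```java
public class MyCalculator {

    public int getOpposite(int number) {
        if (number == 1) {
            return -1;
        }
        if (number == 5) {
            return -5; // passes the tests, but clearly not a general solution
        }
        return 0;
    }
}
```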

Test is passing again… but this solution is not good, let’s refactor.
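The refactored version, a sketch:

```java
public class MyCalculator {

    public int getOpposite(int number) {
        return -number; // the general solution; all three tests still pass
    }
}
```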

Tests are still passing and we’ve solved all the cases! We are done! Well, not really: we still need to document, test more, write more tests, and write even more tests… but we’re on the right path.

I expect this silly example gives you a feel for what TDD is and how it is done.

Now, let’s continue with the discussion, focused on Salesforce.

TDD Advantages

  • Code coverage: We get great code coverage without even thinking about it.
  • Testability: The code is designed to be testable by default (we are actually testing every time we change something).
  • Easier to refactor: We should not change or refactor code without having a set of tests we can lean on. As we are constantly writing tests that we know are able to catch bugs (we make it fail at the beginning), we know that we have a set we can rely on.
  • “Better” code: We are constantly refactoring the code, striving for the best possible code.
  • Predictability: After we finish a “round,” we are completely sure the code is working as we designed it to work and we know we didn’t break anything. We can say we have “working software.”
  • Prevents useless work in Salesforce: In Salesforce, aside from Apex, we have plenty of options for making changes: triggers, workflow rules, Process Builder, etc. Imagine that a test we write to check a change to a value on a contact record passes before we write any production code. We would discover that another moving part is already taking care of that change (or that we wrote the test badly).
  • Documentation: Tests are a great tool to communicate with other developers (or the future you) how, for example, a class API should be called and the expected results of each case.

TDD Disadvantages

  • Overtrust: Because we are testing continuously and seeing tests go green, I sometimes get the feeling that the code is working perfectly… but that doesn’t mean it is. We may miss a case, or simply get lazy and leave it out of the unit tests.
  • Slow in Salesforce: TDD is designed on the theory that compiling and running a test is really fast (a JUnit unit test should run in less than 1 ms). In Salesforce, we need several seconds to compile (the code is compiled on the server) and several more seconds to run the test; in my experience, this is usually 10+ seconds. As we are compiling and running tests constantly, we add several minutes of “waiting for Salesforce.” However, this can be mitigated if you think you will need to write/compile/execute tests later anyway – you might as well do it upfront.


Me when I realize the QA found a case I had not considered when I was doing TDD

I will (probably) use TDD when...

In general, I’ve found that TDD is a great tool in several circumstances and I tend to do it almost always in the below cases.

  • Back-end bug fixes: Doing TDD in this context has two big advantages. First, you make sure you are able to reproduce the bug consistently. Second, and even more important, as you are writing a test specific to the bug, you know you will never introduce that bug again.
  • Back-end work with clear requirements and a clear implementation strategy: In this context, writing tests is going to be easy and implementing the production code will be easy, too, as you know where you are heading when you create the test cases.
  • Back-end work with clear requirements and minor implementation unknowns: In this context, the test is easy to write, and the production code gets clearer as you work through the cases.
  • Back-end work with some requirements discovery: Imagine in our calculator example you write a test to divide by zero and you realize you’ve never discussed that case with the BA. TDD helps you discover opportunities to clarify requirements.

I might do TDD, but it’s unlikely...

  • As part of requirements discovery: You could write unit tests as part of requirements discovery and discuss them with your stakeholders, BA, or other technical people, but you probably have better techniques to support this process.
  • Front-end work: I’m gonna discuss this briefly later, when we talk about Lightning web components.

I will never do TDD when...

  • I’m doing a prototype: By definition, a prototype or PoC should be discarded after we show it, so I code it as fast as I can, focused on demonstrating the core functionality.
  • I’m experimenting: If I’m trying a new idea, I don’t focus on code quality (again, this is a prototype).
  • I’m evaluating implementation options: There are some cases where you want to compare two implementation options, so focus on having a good-enough-to-decide prototype and throw it away after you decide…then do the stuff well.
  • I don’t care about code quality: I know code quality is not usually negotiable, but in very limited and extreme situations, it may not be the top priority. For example, when everything is screwed up on prod and you need to fix the problem ASAP because the company is losing millions of dollars per minute. In this very extreme circumstance, fix the problem as fast as you can, make your company earn money again, go to sleep (or maybe get a drink) and tomorrow at 10 am (yup, after a stressful night, start working a little later the next day) make the code beautiful with TDD. Make a test that reproduces the bug and then fix and refactor the code properly.

Me again, but on one of THOSE nights.

  • When creating test code is extremely difficult (but not impossible): In Salesforce, a few elements are very hard to test, like working with custom metadata types (CMT). In this scenario, I’d probably split the problem into two parts – one that is TDD-doable using mocked data (@TestVisible is your best friend here) and a second, smaller part whose testing I’d figure out later (if I consider it worth testing at all).

How I do TDD in Salesforce

When I’m working in Salesforce, I don’t really do TDD exactly as I defined it at the beginning of this article. Why? Mainly because of the slower compile/test loop, but also because in Apex we generally start by writing integration tests instead of unit tests. So instead of “regular” TDD, I’ve tweaked the formula a bit to work better under Salesforce circumstances.

  1. Write an entire deployable test that checks the flow or use case. Yup, I said deployable: if I call a method I haven’t created yet, I create it, empty, so I can deploy.
  2. Run it and get a failure.
  3. Write the minimum code that makes that test pass.
  4. Refactor.
  5. Continue with the next flow or use case.
  6. When I’m done with all the flows and use cases, I refactor the code again (splitting methods, checking code cleanliness, documentation). I run the unit tests continuously, every few changes, to check that everything still works as expected.

To make everything clear, let’s view an <could-be-real-world> example.

Requirement:
As a user, I want the values stored in any object copied into a number of specified contact fields. The “mappings” will be stored in a custom metadata type (CMT) called Contact_Mappings__cmt, which has two fields:

  • Original_Fields__c (Text)
  • Mapped_Fields__c (Text)

Round 1

As I said before, I start by writing an Apex test that tests a business case. The first one I’m thinking of developing is “The contact should not change if there is no mapping defined.” I have to write a deployable test that is going to fail, along with the minimum amount of production code that lets it compile and deploy:
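The original screenshots aren’t reproduced here, so here’s a minimal sketch of what this step could look like. The names ContactMapper and mapFields are my own inventions; the point is that the empty stub lets everything deploy while the test fails.

```apex
// Hypothetical production stub (its own class file in practice):
// just enough to compile and deploy.
public class ContactMapper {
    public static Contact mapFields(SObject source, Contact target) {
        return null; // intentionally does nothing yet
    }
}

// Deployable test for the first business case; it fails because the stub returns null.
@IsTest
private class ContactMapperTest {
    @IsTest
    static void contactDoesNotChangeWhenNoMappingIsDefined() {
        Contact original = new Contact(LastName = 'Perez');
        Contact result = ContactMapper.mapFields(new User(), original);
        System.assertEquals(original, result, 'Contact must not change when no mapping is defined');
    }
}
```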

We work in split view, with the test and the production code side by side.

As expected, the code deploys but the test fails. So, we need to fix it! We can simply return the same object.
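In the sketch, the minimum fix is a one-liner:

```apex
public static Contact mapFields(SObject source, Contact target) {
    // Minimum code: with no mapping defined, the contact comes back untouched.
    return target;
}
```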

Now it passes, but there isn’t a lot of code to refactor yet (we could extract some constants in the test).
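A possible shape for the refactored test, with the magic string pulled into a constant (still a sketch):

```apex
@IsTest
private class ContactMapperTest {
    private static final String LAST_NAME = 'Perez';

    @IsTest
    static void contactDoesNotChangeWhenNoMappingIsDefined() {
        Contact original = new Contact(LastName = LAST_NAME);
        Contact result = ContactMapper.mapFields(new User(), original);
        System.assertEquals(original, result, 'Contact must not change when no mapping is defined');
    }
}
```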

This is a much better test.

Test still passes!

Round 2

Okay, let’s add another case. What if we check that User.LastName is copied into the contact when we define the mapping LastName => LastName? Great idea, let’s do it!

I start to write the unit test, but… I realize I can’t insert CMT records from a test. I’d have to give the test seeAllData permission and define the records in the project metadata, or somehow deploy them.

Remember when I said I don’t do TDD when writing the test is extremely hard? Well, it looks like I’m in one of those situations. At this moment, I can quit writing this blog post and go cry… or I can redefine what I’m developing with TDD, leaving the complexity out of scope. I imagine you’d be very annoyed after reading this far only to see me quit, so let’s go with the second option.

I can’t use the CMT right now, so let’s do something different. What if we use a Map<String, String> where the key is a field in the original object and the value is the list of field names in the Contact object? It might work; later on, we’d just need to read the CMT and build the map from that information, but spoiler alert… that won’t be covered in this article.

But okay, great, let’s create a map and write the deployable failing test.
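Sketching it with the same hypothetical ContactMapper as before: first we add the map to the class so the test can deploy, then we write the failing test. I’ve made the map public for now; its visibility gets revisited during the refactor.

```apex
// In ContactMapper – lets tests inject mappings (visibility revisited when we refactor):
public static Map<String, String> theMap = new Map<String, String>();

// In ContactMapperTest – the new test; it fails because mapFields ignores the map.
@IsTest
static void copiesUserLastNameWhenMappingIsDefined() {
    ContactMapper.theMap = new Map<String, String>{ 'LastName' => 'LastName' };
    Contact result = ContactMapper.mapFields(new User(LastName = 'Perez'), new Contact());
    System.assertEquals('Perez', result.LastName, 'LastName should be copied from the user');
}
```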

And, as expected… it fails.

Let’s write the “minimum” code that makes that test pass:
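In the sketch, that “minimum” code hard-codes the copy, which is exactly why the quotes are deserved:

```apex
public static Contact mapFields(SObject source, Contact target) {
    // Hard-coded "minimum": makes the new test pass, but it copies LastName
    // unconditionally, so the no-mapping test breaks.
    target.put('LastName', source.get('LastName'));
    return target;
}
```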

Our new test passes, but the other one now fails! Let’s fix that.

Let’s do some refactoring, either in test or production code.

I think the put/get part is messy to read (and has its own meaning), so let’s split it into its own method.

Also, since we want theMap to be injectable in test scenarios, the @TestVisible annotation is useful here.
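Putting the round together, the hypothetical class could end up like this: the loop-driven fix keeps both tests green, the put/get pair lives in its own method, and theMap is @TestVisible so tests can inject it:

```apex
public class ContactMapper {
    // Injected from tests for now; eventually built from Contact_Mappings__cmt records.
    @TestVisible
    private static Map<String, String> theMap = new Map<String, String>();

    public static Contact mapFields(SObject source, Contact target) {
        for (String sourceField : theMap.keySet()) {
            copyField(source, sourceField, target, theMap.get(sourceField));
        }
        return target;
    }

    // The put/get pair has a meaning of its own, so it lives in its own method.
    private static void copyField(SObject source, String sourceField,
                                  Contact target, String targetField) {
        target.put(targetField, source.get(sourceField));
    }
}
```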

Round 3

Now we should add a new test that exercises a new flow and watch it fail. I think you’ve got the idea, so I won’t walk through each round, but just to spell out the cases, I can think of the following (a sketch of the first one follows the list):

  • Mapping a field to multiple fields (separated by colon)
  • Does nothing if origin field is wrong
  • Does nothing if destination field is wrong
  • Does nothing if types are not compatible
    …and so on
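Just to make the first case concrete, a failing test for it might look like this (using Description as an assumed second target field):

```apex
// In ContactMapperTest – fails until mapFields learns to split target fields on ':'.
@IsTest
static void copiesOneSourceFieldIntoMultipleContactFields() {
    ContactMapper.theMap = new Map<String, String>{ 'LastName' => 'LastName:Description' };
    Contact result = ContactMapper.mapFields(new User(LastName = 'Perez'), new Contact());
    System.assertEquals('Perez', result.LastName);
    System.assertEquals('Perez', result.Description);
}
```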

Can we do TDD in Lightning web components (or front-end)?

The short answer is yes, we can.

Long answer: since Jest tests can’t see the components themselves, only the “generated DOM,” it may be harder to do TDD efficiently for front-end solutions. Usually, it is better to test visual code by watching the result and THEN write the tests we need to ensure the code won’t break in the future.

Conclusion

TDD is a best practice that’s good to master so you can decide the best moment to apply it (I don’t believe there is One Ring to rule them all, One Ring to find them, One Ring to bring them all, and in the darkness bind them – thank you, J.R.R. Tolkien). Applied correctly, it will help you produce better, more robust code… and fewer bugs, which means…

Homer is happy