Softrams is deeply engaged at CMS across a variety of prime contract programs. These CMS programs support significant numbers of users both external and internal to CMS. The web-enabled interfaces of these applications are extensive, and some of our most significant achievements have been in the design and redesign of workflows in close collaboration with users and stakeholders. In these efforts to design human centered software, we focus on the Human Experience – the needs, context, behaviors, and emotions of the people the solutions will serve. Human Centered Design (HCD) is a primary focus for Softrams, as is innovation with artificial intelligence (AI) and machine learning (ML).

AI Opportunities and Impact

Organizational workforces continue to spend considerable time on administrative and internal-facing activities – even after the implementation of IT systems that fulfill many data processing and related tasks. Some of this stems from the need for human intervention with unstructured data – information that AI approaches can now tackle. In all, AI is enabling organizations like CMS to refocus people even further on higher-level strategic work and external-facing consultative work with constituents and customers.

AI brings the capability to do new things and to do existing things differently. And, of course, this is not without impact. As AI is implemented, it will change business processes and affect the workforce, users, contractors, stakeholders, and the public, as many other IT implementations have before. And within these new business processes, humans are now interacting with a different type of system than before. So it’s useful to explore the implications for human centered design work.

AI may be applied with varying degrees of automation. One approach is decision support – where AI provides information and recommendations that support human performance. This is augmentation. Alternatively, AI can selectively filter what requires manual intervention – bypassing or completing some tasks while prioritizing others for human action and decision. Finally, of course, some tasks may be nearly fully automated.
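As a minimal sketch of these three modes, consider a routing function that decides how much AI support each case receives based on the model's confidence. The thresholds, field names, and labels here are illustrative assumptions, not a real CMS system:

```python
# Hypothetical triage sketch: decide how much AI support each case receives.
# Thresholds, field names, and labels are illustrative assumptions.

AUTO_THRESHOLD = 0.95    # above this, the task is nearly fully automated
REVIEW_THRESHOLD = 0.60  # above this, AI recommends and a human decides

def route(prediction: dict) -> str:
    """Return "automate", "augment", or "manual" for one case.

    prediction: {"label": str, "confidence": float}
    """
    confidence = prediction["confidence"]
    if confidence >= AUTO_THRESHOLD:
        return "automate"  # complete the task; log it for audit
    if confidence >= REVIEW_THRESHOLD:
        return "augment"   # show the recommendation and confidence to a human
    return "manual"        # prioritize for full human action and decision

cases = [
    {"label": "approve", "confidence": 0.99},
    {"label": "deny", "confidence": 0.72},
    {"label": "approve", "confidence": 0.40},
]
print([route(c) for c in cases])  # ['automate', 'augment', 'manual']
```

In a real system the thresholds would be set with the business owners, and "automated" actions would still be logged for audit and human review.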

AI Helping Humans & Humans Helping AI

It’s true that, over the next decade, more work will be assigned to machines. However, in nearly all assessments, human-machine collaboration will still be required to use AI effectively. That is, AI augments humans. Full AI replacement of human roles in business processes is not yet expected, even for structured, repetitive activities like filing documents. AI-human collaboration will be more significant for quantitative reasoning skills such as interpreting language, performing analytics, and programming software. And, of course, cross-functional reasoning skills such as developing strategy and managing people are expected to be AI-assisted in only limited ways.

The human-machine collaboration is bi-directional, in that there is often a process of co-learning and co-adaptation. When people interact with AI systems, they can influence what the system will produce in the future. The systems that evolve over time alongside their users are often the most helpful; but as they change, this in turn affects how users interact with them. Human users also play roles in training the AI further, monitoring the AI’s performance, and helping to put the AI’s recommendations into a business context. In other words, I may have an AI at my disposal, but my AI has a human too. How do we best support users in these roles? For both sides of the coin, we have much work to do to gain further experience with how these processes will operate, the roles and responsibilities involved, and the best designs for supporting both users and machines in those roles. This is what human centered design research can address for us.
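As a toy illustration of this bi-directional loop, here is a deliberately simple recommender whose suggestions come entirely from the corrections humans feed back to it. Every name and label in this sketch is hypothetical:

```python
# Toy co-learning sketch: the system's next recommendation depends entirely
# on the corrections humans have fed back so far. All names are hypothetical.
from collections import Counter
from typing import Optional

class CoLearningRecommender:
    """Recommends whichever label human reviewers have confirmed most often."""

    def __init__(self) -> None:
        self.confirmed: Counter = Counter()

    def recommend(self) -> Optional[str]:
        # AI helping humans: surface the current best guess, if any.
        if not self.confirmed:
            return None
        return self.confirmed.most_common(1)[0][0]

    def feedback(self, human_label: str) -> None:
        # Humans helping AI: each review becomes training signal.
        self.confirmed[human_label] += 1

recommender = CoLearningRecommender()
recommender.feedback("route-to-claims")
recommender.feedback("route-to-claims")
recommender.feedback("route-to-appeals")
print(recommender.recommend())  # route-to-claims
```

Real ML systems update model weights rather than counters, but the loop is the same: the humans' monitoring and corrections shape what the system recommends next.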

Human Centered Machine Learning

Let’s look at some of the areas in which Human Centered Machine Learning (HCML) must focus:

  • Understanding Bi-directional Adaptation: This has implications for human centered design. There may be a need to go beyond personas and examine the needs and interactions of both human and non-human actors. Some have suggested adopting Actor-Network Theory approaches, for example.
  • Designing for Trust: We rely on many cues to judge our trust in human experts – and not all of them are reliable. What information do we use, and what biases do we hold, when placing trust in an AI?
  • Communicating AI Limitations: A top design challenge is communicating and teaching users that machines are limited and may be wrong. What interface designs help users recognize the limitations of the AI? For example, an interface might display the level of confidence the AI has in its recommendation. It’s very important to weigh the costs of both false positives and false negatives and communicate them.
  • Addressing Explainable AI (XAI): Deep learning models are largely black boxes, and why the AI offers a given recommendation cannot be readily made apparent. Some common methods, like Local Interpretable Model-Agnostic Explanations (LIME), fit simpler, explainable machine learning models that approximate why the AI provided a particular answer. This makes it possible to design results displays as we have done to date for logistic regression models and other analytics.
  • Working Around the Unknown: Prototyping AI systems can be difficult given the considerable training investment required to develop the solution. Some reports suggest that “Wizard of Oz” methods are seeing a resurgence after many years. In this approach, humans act behind the scenes to simulate the AI during prototyping exercises.
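To make the point about false positives and false negatives concrete, a results display can choose and explain its recommendation by expected cost rather than a bare score. A minimal sketch, with invented costs standing in for real business inputs:

```python
# Illustrative expected-cost decision for a binary flag (e.g. an anomalous
# claim). The costs are invented for the sketch; in practice they come from
# the business domain, not from the model.

COST_FALSE_POSITIVE = 1.0   # cost of flagging a legitimate case
COST_FALSE_NEGATIVE = 20.0  # cost of missing a real problem

def should_flag(p_anomaly: float) -> bool:
    """Flag when ignoring the case is expected to cost more than flagging it."""
    expected_cost_of_flagging = (1 - p_anomaly) * COST_FALSE_POSITIVE
    expected_cost_of_ignoring = p_anomaly * COST_FALSE_NEGATIVE
    return expected_cost_of_ignoring > expected_cost_of_flagging

def explain(p_anomaly: float) -> str:
    """Text a results display could show, exposing the AI's confidence."""
    action = "Flag for review" if should_flag(p_anomaly) else "No action"
    return f"{action} (model confidence: {p_anomaly:.0%})"

print(explain(0.10))  # Flag for review (model confidence: 10%)
print(explain(0.02))  # No action (model confidence: 2%)
```

Note that with these costs a mere 10% anomaly probability still warrants a flag – exactly the kind of asymmetry users need the interface to communicate.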

Designing With Subject Matter Expertise

It’s clear that significant technical expertise is required for managing AI development. But it is equally apparent, as with other analytics, that this development needs to be supported by an understanding of the business domain areas and the users. This is the age-old problem in developing IT: the product management balance of building the right product for business value and building the product right for usability.

Machine learning and AI applications still most often fail because user-centered design is not done well. And business value requires expertise to achieve – model building carries risk when not all factors are under control. As an example, consider the data scientist who detects suppliers billing anomalous amounts of medical oxygen. However, the cases identified are exceptions: they result from where the patients live – in the mountains, with high altitudes and thin air. Not necessarily an obvious finding, but unlikely to be revealed with the data scientist working alone. As we transform static business intelligence displays into interactive AI systems, the key ingredient will be training the AI to recognize business significance – when trends and events are meaningful.
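The oxygen example can be sketched in a few lines to show why the domain view matters: suppliers who look anomalous in the aggregate may be unremarkable once claims are compared within the patients' altitude group. All data and field names below are invented for illustration:

```python
# Invented monthly oxygen volumes; high-altitude patients simply need more.
from statistics import mean, stdev

claims = [
    {"supplier": "A", "altitude": "low",  "volume": 100},
    {"supplier": "B", "altitude": "low",  "volume": 102},
    {"supplier": "C", "altitude": "low",  "volume":  98},
    {"supplier": "D", "altitude": "low",  "volume": 101},
    {"supplier": "E", "altitude": "low",  "volume":  99},
    {"supplier": "F", "altitude": "low",  "volume": 100},
    {"supplier": "G", "altitude": "high", "volume": 180},
    {"supplier": "H", "altitude": "high", "volume": 185},
]

def outliers(rows, z_cutoff=1.5):
    """Suppliers whose volume is more than z_cutoff standard deviations out."""
    volumes = [r["volume"] for r in rows]
    mu, sd = mean(volumes), stdev(volumes)
    return [r["supplier"] for r in rows if abs(r["volume"] - mu) / sd > z_cutoff]

# The data scientist working alone: high-altitude suppliers look anomalous.
print(outliers(claims))  # ['G', 'H']

# With domain knowledge, compare like with like: the anomalies disappear.
by_altitude = {}
for r in claims:
    by_altitude.setdefault(r["altitude"], []).append(r)
print({alt: outliers(rows) for alt, rows in by_altitude.items()})
# {'low': [], 'high': []}
```

The statistics are trivial here on purpose: the correction comes from knowing which grouping is meaningful, which is domain expertise, not model sophistication.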

Final Thoughts: Managing AI Change

Ultimately our key decisions are human ones that an AI cannot necessarily answer:

  • What are the business problems we need to solve? We need to use real intelligence here, not the artificial stuff.
  • How can I get a solution to my business problem most cost effectively? In many cases, we might ask whether an AI model is even necessary.

When AI is the right solution, user engagement from inception is critical. This helps to ensure that the users’ functional and emotional goals are met. In the end, it supports user acceptance of the AI and its recommendations.

Clearly, internal users of business systems will have concerns about the impact of AI on their roles and jobs. It’s important not to take users out of their role as actors in the process and project them into the role of observers by focusing only on the benefits AI offers them. Change management communications are more effective with users when the focus is on the organization’s business objectives in using AI. Done correctly, this moves towards engaging users in support of how that will happen.

We are honored to have these awards

Tech Industry
Disruptive Tech Fast 50 Award
Inc 5000 Award
75 WD

TL;DR: User experience is too important to be left to designers. User journey testing that helps to build those great experiences is too important to be left to testers or developers. Read on to see why it must be a team sport and how you can enable your entire team to be part of it.

A user journey, simply put, is a path a user may take to accomplish a goal. At times, this may involve just a single application like a website, web application, or mobile application. Other times, it may involve multiple applications accessed in a certain sequence. Even in cases where a single application is presented to users, that application typically assembles many interactions behind the scenes.

So it is imperative for a team striving to build a great, seamless experience to understand the ‘entire’ user journey and build solutions with users at the front and center. This is different from a systems-based approach, where the focus is on building and optimizing systems to support and deliver capabilities.

At Softrams, we call this design approach Human Experience (HX) Design: building engaging and empowering digital services. Our overall approach can be simplified into the following four ideas:

  • Put users at the front and center
  • Design and build with users, not just for users
  • Leverage technology to empower users
  • Build iteratively and learn more about users at every iteration

User journey tests, tests that mimic and validate a path a user may take to accomplish a goal, are hence critical – particularly in this modern world of microservices and micro-frontends, where the final end-user experience is assembled from varied sources and backend systems. For almost all non-trivial applications, this end-user experience is built by multiple teams.

User journey testing should not be confused with ‘usability testing’. Usability testing is done with real users of the product or service as part of the Human Experience Design process, to evaluate whether the solution is indeed useful and usable. Real users are typically asked to complete tasks and/or accomplish certain goals while our teams watch and learn.

While slightly different in focus and objectives, these tests are also referred to as end-to-end tests, acceptance tests, and so on. In some organizations, dedicated teams perform these tests. Some teams (often referred to as QA teams) perform them manually, clicking through a web application as a typical user would. Some hire ‘automation engineers’ to build ‘browser automation’ test suites so that these tests can run in an automated fashion. Some organizations push this responsibility to the developers who build those very applications, making automated end-to-end tests part of development itself and eliminating dedicated QA or automation teams. Others use a hybrid approach along that continuum.

Because building great user experiences is a team sport, we believe user journey tests cannot be left to developers, QA, or automation engineers alone. They, too, must be a team sport, where every member of the team is engaged and able to contribute.

This requires us to build that culture in the team to begin with, and to use tools and processes that nurture it.

One important aspect of this approach is to use tools that are accessible to everybody on the team – those with a programming background and those without. Traditionally, testing and automation tools are left to developers or automation engineers, because building automated tests requires writing code.

We evaluated multiple tools and chose the Gauge framework about two years ago, and we have successfully implemented it across the organization. We also built supporting tools to help our teams adopt the framework in various products.

Gauge + Taiko Framework

Gauge with the Taiko driver offers significant benefits for building user journey tests. You may learn more from the Gauge and Taiko documentation, but here are the key benefits that made a big impact for our teams.

  • Tests are written in plain English using Markdown, so anybody on the team can contribute
  • Anybody on the team can create easy, readable, and maintainable tests
  • Great interactive reports and documentation for test runs, including a screenshot for every failure
  • Ease and flexibility in organizing test cases multiple ways, e.g., to run a subset of test cases

Fully configured browser-based environments

We have also created a fully browser-based environment that lets non-programmers easily access test projects and environments to review, contribute, and run tests without having to install and set up anything locally. It provides a Docker container with all test tooling set up and opens VS Code inside the browser. You may provision and run workspaces using these containers to offer fully automated, browser-based test environments.

Check out for more.

Softrams Automation Toolset

Gauge Taiko Steps to cover most common scenarios

Gauge Taiko Steps is an open-source, free-to-use implementation of most of the common actions a typical user relies on to interact with a web application. The repository implements common Gauge steps for the Taiko API, so that tests can be created in plain language without having to implement steps programmatically for most common scenarios. This means anybody on the team can write fully executable and verifiable test specifications, in plain language, with no additional programming or development needed.

Check out for more

Softrams Gauge Taiko Steps

Sample Test Scenario

I would like to show a very simple test scenario to demonstrate the readability of a test case that is nonetheless fully executable and automated. Each line that begins with * is an executable test step. You can see the plain language used in each step, compared with a typical automation test that only developers can understand.

# Navigation User Journey Test
This is the place you explain the key user journey scenario in plain text or Markdown.

You may additionally **add more notes** about this test scenario. 

In this sample, we will visit the website, one of the most widely accessed health information websites, and search for Covid information.

## Open the website and search for Covid

Visit the website
* Goto ""

Verify that the page loaded correctly by looking for specific text on the page.
The presence of the text "About CDC" confirms that the page has fully loaded.
* Check text "About CDC" exists

Now that the page has fully loaded, go ahead and run an accessibility check to see
if there are any accessibility issues
* Audit page for accessibility

Let us test how the search functionality works on the page by searching for "Covid" in the search box and
pressing the Enter key on the keyboard.
* Write "Covid" into "Search"
* Press "Enter"

Once the search page is fully displayed, let us go ahead and verify whether there are any accessibility issues on the page
> Note the flexibility in the framework to evaluate Accessibility at each user interaction on each page. 

* Audit page for accessibility

Go to the Videos tab and look for a specific video. Again, once the page is loaded, audit it for accessibility
* Click "Videos"
* Check text "Know Your Risk of Getting COVID-19" exists
* Audit page for accessibility

Once the test is run, here is a screenshot of the test report, again demonstrating the ease and readability of the test case for anybody on the team.

Screenshot showing a test run
Sample Test Case

Since adopting the framework about two years ago, Softrams teams have built thousands of test cases, with contributions from every member of the team across many programs.

If you like what you see, check out the getting-started post for the framework.

We are eager to know how others are approaching user journey tests. Let us know your thoughts and ideas. Also, do try the tools and frameworks shared in this post and share your feedback.
