
Introducing “Functional Performance Testing” Part 2


Complete test case design: generate functional tests and data that fully “cover” multi-tier systems

This is Part 2/3 of “Introducing “Functional Performance Testing”, a series of articles considering how to test automatically across multi-tier architecture, and across the testing pyramid. Read Part One here, or download the whole series as a paper.

Part One of this series set out the reasons why effective Load testing must choose from a vast number of possible tests. Since every test cannot be executed in short iterations, it concluded that rigorous Load testing must instead aim to test every logically distinct scenario that might be exercised in production.

Fortunately, the need to identify an executable number of tests that cover a large number of complex scenarios is not new. Model-Based testing is all about mapping the routes through a system’s logic, generating tests to “cover” every logically distinct positive and negative path.

These principles can be applied to performance testing, overcoming the complexity just described in Part One, and many of the challenges associated with Load testing across multi-tier architecture. This in turn leads to a new realm of testing, with tests that account for both functionality and performance factors. This introduces “Functional Performance Testing”.

Modelling complex systems rapidly

“Functional Performance Testing” also begins with a logical model of the paths that can be taken to various end-points in a system’s logic. The models might map users’ routes through a UI, and can also map the actions contained in APIs. The models can in turn be chained together, in order to generate tests that cut across multi-tier architecture.

Modelling maps out the paths that can be taken to each end-point involved in a system or component, with different data variables defined to exercise each logical step. For a UI, these models might map user activity and user-inputted data:

[Image: Model-Based UI Testing]

A quick-to-build flowchart model maps the routes a user can take through the UI of a log-in page, entering valid data or combinations of invalid data into a username and password field. These routes can result in two endpoints: user authentication and login success, or failed authentication and login failure.

Using Test Modeller, these models are quick to build, and modelling is compatible with short, Agile sprints. A range of importers are available to convert existing tests and system requirements into models, from a range of formats. A UI Recorder can additionally be used to complete the logical models, and captured message data can be imported from tools like Fiddler.
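Conceptually, a flowchart model like the log-in page above can be thought of as a directed graph whose edges carry equivalence classes of data, with each distinct path to an end-point corresponding to one logically distinct test. The sketch below is illustrative only (the node and edge names are invented, and this is not Test Modeller's actual API):

```python
# A hypothetical log-in flow as a directed graph. Edges carry the
# equivalence class exercised; each root-to-leaf path is one test case.
LOGIN_FLOW = {
    "start":    [("username: valid", "password"), ("username: invalid", "fail")],
    "password": [("password: valid", "success"), ("password: invalid", "fail")],
    "success":  [],
    "fail":     [],
}

def enumerate_paths(graph, node="start", path=()):
    """Depth-first walk yielding every distinct path to an end-point."""
    if not graph[node]:
        yield path
        return
    for label, nxt in graph[node]:
        yield from enumerate_paths(graph, nxt, path + (label,))

tests = list(enumerate_paths(LOGIN_FLOW))
for test in tests:
    print(" -> ".join(test))
```

This tiny model already yields three logically distinct tests: a successful log-in, a valid username with an invalid password, and an invalid username.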

Models similar in style can also be built to map APIs. These models contain the range of actions or methods by which an API might transform user-inputted or machine data:

[Image: Model-Based API Testing]

Different items can be entered into a shopping cart on an eCommerce store, leading to either a valid API call to Add Item, or to an invalid request.

The models of UIs and APIs can then be combined, in order to create models from which to test across multi-tier architecture.

Every model created in Test Modeller is re-usable, becoming subflows that can be dragged-and-dropped onto the canvas. This easy-to-use, visual approach chains the modelled components together to create master flowcharts that test across multi-tier architecture. For instance, subflows used to test a UI can be combined rapidly with subflows that generate API tests.

Testers can therefore combine models rapidly, creating flows that contain a rich set of user activity exercised against UIs, as well as the API calls generated by that activity. Flowcharts that are simple in appearance can contain information far beyond the cognitive capabilities of a human:

[Image: Subflows]

Subflows chain together models of both API calls and the logic contained in UIs.
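The chaining idea can be sketched in code: two small graphs, one for the UI and one for the API, merge at a shared hand-off node, and paths through the resulting master flow cut across both tiers. All names below are illustrative, not Test Modeller's API:

```python
# Hypothetical UI subflow: a user adds an item, which either hands off
# to an API call or ends in a UI-level rejection.
UI_SUBFLOW = {
    "start":    [("add valid item", "call_api"), ("add invalid item", "error")],
    "call_api": [],   # hand-off point, filled in by the API subflow
    "error":    [],
}

# Hypothetical API subflow, starting at the shared hand-off node.
API_SUBFLOW = {
    "call_api": [("Add Item: authorised", "done"), ("Add Item: unauthorised", "done")],
    "done":     [],
}

def chain(ui, api):
    """Merge subflows; the API edges replace the UI's empty hand-off node."""
    master = dict(ui)
    master.update(api)
    return master

def count_paths(graph, node="start"):
    """Count distinct paths to an end-point in the merged flow."""
    edges = graph[node]
    return 1 if not edges else sum(count_paths(graph, nxt) for _, nxt in edges)

master_flow = chain(UI_SUBFLOW, API_SUBFLOW)
print(count_paths(master_flow))  # 3: one UI-only path, two end-to-end paths
```

Because each subflow is modelled once and merged by reference, the number of cross-tier combinations grows multiplicatively without any extra modelling effort.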

The assembled flowcharts account for the first three requirements of functional testing listed in Part One. First, the models account for the logical journeys that a user can take through a system, inputting data along the way. As shown in the above example, a UI might be modelled, defining the combination of fields into which users can enter valid and invalid data.

Second, the same models can include the machine data that can be generated by production activity, split by equivalence classes. Third, the range of actions or methods involved in an API call can also be modelled, setting out how an API might exercise user-inputted or machine data. All of this information is in turn reflected in the tests generated from these models, creating tests that satisfy the first three criteria listed in Part One.

A significant advantage of this drag-and-drop approach is the ease with which complex chains of APIs can be tested, helping with the fourth criterion listed in Part One. The logic and data involved in each individual API or UI screen only needs to be modelled once, and can then be connected via their start and end-points. Mathematical algorithms will then identify the vast range of combinations involved, resolving the complexity of test case design for multi-tier architecture.

The chained-up models at this point represent the routes that can be taken through a system's multi-tier architecture, arriving at a range of end-points. This is everything needed to generate a set of optimized tests automatically using The VIP Test Modeller. However, automated testing additionally requires test data with which to exercise the routes through the system.

Defining dynamic test data for every possible test

With The VIP Test Modeller, test data variables are defined for each relevant node of the model, ready to be combined into logically distinct tests. This creates the range of data combinations that could be exercised against a system in production, both UI-inputted and machine data:

[Image: Test Data Definition]

Data variables are specified to define the values that a user could create in their interactions with a system.

The data can in turn be rolled up into parameterised Load tests, injecting the data via messages executed against an API.
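As a rough sketch of what "rolling up" data into parameterised messages means, each data combination can be injected into a templated API request. The endpoint, field names, and data rows below are hypothetical, purely for illustration:

```python
import json

# Hypothetical data combinations produced from the model's variables.
test_data = [
    {"username": "alice", "password": "S3cret!", "expect": 200},
    {"username": "alice", "password": "",        "expect": 400},
    {"username": "",      "password": "S3cret!", "expect": 400},
]

def build_message(row):
    """Inject one data combination into a parameterised login request."""
    return {
        "method": "POST",
        "url": "/api/login",  # hypothetical endpoint
        "body": json.dumps({"username": row["username"],
                            "password": row["password"]}),
        "expected_status": row["expect"],
    }

messages = [build_message(row) for row in test_data]
print(len(messages), "parameterised messages ready for execution")
```

The same message template then serves every data combination, so Load tests and functional tests can share one parameterised definition.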

Where this approach gets smart is in defining the test data values dynamically for each variable. This in turn creates diverse, high-volume test data that resolves "just in time" during test execution. Realistic test data is thereby made available on demand, generating the data needed to test the range of functional logic involved in a complex system, while also testing it at a variety of workloads.

The VIP Test Modeller enables you to define synthetic data functions for every node in your functional model. There are over 500 functions that resolve dynamically during test execution, all of which can be combined using a simple, visual functional editor:

[Image: Dynamic Test Data Definition]

Dynamically defining data to test a username field in a UI: The Data Editor provides over 500 combinable, dynamic data generation functions.

This creates a diverse variety of data, accurately reflecting the real-world data that could be inputted into a system. The data covers information entered through UIs, as well as machine data. It can all be rolled up into messages to fire off during testing, a process described in Part Three of this series.
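The principle of combinable, "just in time" data functions can be illustrated with two toy generators. The 500+ built-in functions belong to The VIP Test Modeller; the two below are invented stand-ins, and each variable resolves to a fresh value only at execution time:

```python
import random
import string

def random_string(length):
    """Generator: a fresh lowercase string each time it resolves."""
    return lambda: "".join(random.choices(string.ascii_lowercase, k=length))

def numeric_suffix(base_fn, digits):
    """Combinator: append a random numeric suffix to another generator."""
    return lambda: base_fn() + "".join(random.choices(string.digits, k=digits))

# A username variable built by combining two dynamic functions.
username = numeric_suffix(random_string(8), 4)

# Each execution resolves a distinct, realistic value on demand.
print(username())
print(username())
```

Because the value is a function rather than a stored constant, high-volume runs never exhaust the data, and no stale data refresh is needed between executions.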

Automated and optimised test case design

Having designed the routes through the multi-tier architecture, the full range of distinct tests and data needed to reach each end-point in the model can be generated. This test case design is automated and optimized, by virtue of the flowchart models being formal, directed graphs.

The VIP Test Modeller uses mathematical algorithms to tackle the challenge of massive complexity, creating the minimum number of test cases which maximise test coverage. Tests are equivalent to paths through the flowchart, and the coverage algorithms create the smallest set of test cases needed to exercise every distinct path through the model, just as a car GPS can identify different routes through a city map:

[Image: Automated Test Case Design]

Automated test case design: mathematical algorithms generate the smallest set of paths needed to exercise every logically distinct combination. This includes both user and machine activity, with tests "covering" both the UI and API.

The requirements for multi-tier testing fulfilled

This produces a set of test cases with associated data that will exercise the full range of distinct data scenarios that might occur in production, both user-inputted and machine data. The tests additionally include the variety of distinct calls and data associated with any one API, as well as the logically distinct routes that can be taken through a UI.

They can moreover be chained together using a drag-and-drop approach, creating a set of test cases that can be executed within an iteration, but which nonetheless cover all the distinct logic in complex chains of API calls.

In other words, the automatically generated tests fulfil all four criteria identified above for testing across APIs and UIs. They cover:

  1. The full range of values that a user can input during production, both valid and invalid.

  2. The full range of machine data that could be generated by users in production, via UIs or APIs. This includes content-type, session IDs, authentication-headers, user-agents, and more.

  3. The full range of methods or actions that API Calls exercise on the data.

  4. The combinations of all of the above, joined together into chains of API calls.

The article has focused so far on achieving testing rigour when faced with the complexity of multi-tier architecture. However, it is also worth noting some significant time gains of this approach, which allow rigorous testing to occur in-sprint:

  • Automated test case design from models is generally far quicker than manually defining repetitive test steps for a large number of test cases.

  • “Just in time” data resolution avoids the bottleneck and compliance risk associated with using production data, replacing slow data refreshes and cross-team constraints.

  • Expected results are created at the same time as test cases, avoiding the manual creation of hard-to-define responses.

  • Test maintenance is significantly accelerated, avoiding arguably the greatest automated testing bottleneck. The model is the source of truth and living documentation for the system. If a component changes, only the model for that component needs to be updated. QA teams can then quickly regenerate up-to-date test assets for every master flow in which that component features. This is far faster and more reliable than having to check and update test cases, test scripts, and data by hand. The time and reliability gains are particularly significant for complex, multi-tier systems, where identifying every impact of a change across a myriad of interrelated components is highly complex. With manual maintenance, many impacts of a change go unnoticed, leaving much of the affected system untested. Tests cannot be updated quickly enough either, leading to a pile-up of invalid tests that throw up automated test errors. By contrast, dependency mapping using flowchart models avoids this bottleneck: simply updating one subflow will reflect the change made to one component across every flowchart in which that component features.
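The maintenance gain comes from master flows referencing subflows by name rather than copying their steps. A toy sketch (all flow and step names are hypothetical) shows how one subflow update propagates to every regenerated master flow:

```python
# Hypothetical shared subflow, referenced by name from master flows.
SUBFLOWS = {
    "login": ["open page", "enter credentials", "submit"],
}

# Hypothetical master flows that each reuse the "login" subflow.
MASTER_FLOWS = {
    "checkout":       ["login", "add to cart", "pay"],
    "profile_update": ["login", "edit profile", "save"],
}

def expand(flow_name):
    """Regenerate a master flow's steps, resolving subflow references."""
    steps = []
    for step in MASTER_FLOWS[flow_name]:
        steps.extend(SUBFLOWS.get(step, [step]))
    return steps

# The system changes: log-in now requires an MFA check. One subflow edit...
SUBFLOWS["login"] = ["open page", "enter credentials", "pass MFA check", "submit"]

# ...and every regenerated master flow reflects the change automatically.
print(expand("checkout"))
print(expand("profile_update"))
```

No master flow was touched, yet both regenerated test sequences now include the MFA step: the single subflow definition is the only place the change had to be made.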

The next section of this series turns to how these rigorous test cases can be executed automatically, both for functional and performance testing. This will cover the additional criterion listed in Part One for Load testing across multi-tier architecture: tests must include the diverse parameters needed to simulate the range of Load and stress that a system might be subjected to in production.

[Image: Pixabay]
