
5 Reasons to Model During QA, Part 4/5: Faster QA Reaction Times


Welcome to part 4/5 of 5 Reasons to Model During QA! If you have missed any previous instalments, use the following links to see how modelling can:

  1. Identify bugs during the requirements analysis and design phase, where they require far less time and cost to fix;

  2. Drive up testing efficiency, automating the creation of test cases, test data and automated test scripts;

  3. Maximise test coverage and shorten test cycles, focusing QA on the most critical, high risk functionality.

Model-Based Testing further enables testing to react to fast-changing applications, rapidly updating test suites to validate a change made to the code. This flexibility and resilience are the focus of today’s article, which discusses how modelling accurately forecasts the complexity of a change and automates test maintenance.

Working with Change: Flexibility and Resilience

Software applications today are both massively complex and fast-changing. Short iterations bring code commits on a monthly, weekly, or daily basis, and QA must validate the success of each update.

These fast QA reaction times require an approach that is:

  1. Resilient: Testing must be able to maintain test coverage, identifying and creating all the test assets needed to validate a change. This must happen in the same iteration in which the change was made, otherwise chunks of the code will go untested.

  2. Flexible: QA must be able to maintain test suites at the pace at which applications change, adopting a reactive and flexible stance to test maintenance.

However, most organisations instead face an undesirable choice between QA flexibility and QA resilience. Their current testing practices mean there is not enough time in an iteration to identify, create, and execute every test required to validate a change made to the code.

Test teams therefore cannot fully test a change within the iteration in which it was made, and code changes in turn risk costly defects in production. Fortunately, there is a way that QA can achieve the resilience and flexibility required to continuously implement change: modelling.

Common Barriers to Continuous Testing

Two broad barriers prevent QA from keeping up with the rate of change: identifying what needs to be tested after a change, and then updating or creating the test assets needed to validate the change.

Identifying what needs (re)testing

A change request today often takes the form of a new user story or request sent to developers. These requests enter the bag of disparate and unconnected requirements that make up a system. These requirements were discussed in part one of this series.

The disparate requirements are not formally mapped to one another, so there is no automated way to identify which parts of a system are impacted by a new change request. If a new Gherkin Specification is created, for instance, how can BAs, developers and testers reliably assess the impact of one Behaviour-Driven Scenario across the multitude of interrelated parts in a system?

The challenge of change requests.

Identifying the interdependent parts of a system that have been impacted by a change is often guesswork in this scenario, as is assessing the complexity of a change. Low priority changes can have unforeseen impacts across a complex system, requiring testing and development efforts that are disproportionate to the value of the change.

The responsibility of QA is to identify these problematic and unforeseen consequences of a change. However, testers also lack the ability to reliably identify the impact of a change, for lack of formal dependency mapping in the requirements.

Slow and manual test maintenance

Then there’s the time needed to create or update any tests required to validate a change.

Part two of this series set out the bottlenecks associated with manually creating test cases, test data and automated test scripts. Much of this effort is often repeated after a change, forcing tests to roll over constantly to the next iteration.

Manually created test assets are rarely traceable to the system designs, nor are they typically linked to one another. Testers must therefore analyse existing tests one-by-one to identify the impact of a change on a regression pack. They must then update the test cases, test data and automated tests, keeping all three aligned.

QA teams must additionally create the tests needed to test new functionality, but there is little time in a sprint for both test maintenance and manual test creation.

Alternatively, invalid tests might go unchecked, piling up in the regression pack. These invalid tests will then flag defects when there are no genuine bugs in the code, while bad test data will destabilise test automation frameworks.

A new approach is instead required, identifying the impact of changes made to a system and reflecting them efficiently in test suites.

Reactive Test Automation

Model-Based Testing, with the right tools and techniques, introduces the flexibility needed to update test assets in time, as well as the QA resilience to continually test with rigour. The impact of changes can further be forecast in advance, enabling evidence-based software design decisions.

Avoiding time and scope creep

Flowchart modelling first enables you to measure the impact of a change in advance. BAs, developers and testers can rapidly incorporate a change request into the model, or might make the request using the model itself. The paths through the updated model can then be identified automatically using mathematical algorithms.

These paths are equivalent to tests, and the number of paths impacted thereby provides a test-driven measure of complexity. This offers a reliable and standardised way to measure the relative value of a change against the impact of implementing it.
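As a rough sketch of this idea (not Curiosity's actual algorithm; the graph representation and node names here are invented for illustration), a flowchart can be treated as a directed graph and its paths enumerated with a depth-first search. Comparing the path count before and after a change then gives a test-driven measure of the change's complexity:

```python
from collections import defaultdict

def enumerate_paths(edges, start, end):
    """Depth-first enumeration of every start-to-end path in an acyclic flowchart."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)
    paths = []

    def dfs(node, path):
        if node == end:
            paths.append(path)
            return
        for nxt in graph[node]:
            dfs(nxt, path + [nxt])

    dfs(start, [start])
    return paths

# A toy login flow: validate credentials, then succeed or fail.
before = [("start", "validate"), ("validate", "ok"), ("validate", "fail"),
          ("ok", "end"), ("fail", "end")]
# The change request adds an optional MFA step after validation.
after = before + [("validate", "mfa"), ("mfa", "ok"), ("mfa", "fail")]

print(len(enumerate_paths(before, "start", "end")))  # 2 paths before the change
print(len(enumerate_paths(after, "start", "end")))   # 4 paths: the change doubles the tests
```

Here a seemingly small change doubles the number of paths, signalling a bigger testing effort than the change request alone might suggest.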

This impact analysis can extend beyond individual components, using subflows to create dependency maps of a system. The subflows group lower-level logic in master flows. The impact of a change made to one subprocess is then identifiable in the master flowcharts, as well as downstream in the child models.
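A minimal sketch of such a dependency map (the flow names and the `contains` structure are illustrative, not Modeller's internal format): record which parent flows embed which subflows, then walk the reverse edges to find every flow whose generated tests a change may invalidate:

```python
def impacted_flows(contains, changed):
    """contains maps each parent flow to the subflows it embeds.
    Return every flow whose tests may be invalidated by a change to `changed`."""
    # Build reverse edges: subflow -> set of parents that embed it.
    parents = {}
    for parent, subs in contains.items():
        for sub in subs:
            parents.setdefault(sub, set()).add(parent)

    impacted, frontier = set(), [changed]
    while frontier:  # propagate the impact up through every ancestor flow
        node = frontier.pop()
        for p in parents.get(node, ()):
            if p not in impacted:
                impacted.add(p)
                frontier.append(p)
    return impacted

contains = {
    "checkout_master": ["payment", "address_lookup"],
    "onboarding_master": ["address_lookup"],
    "payment": ["card_validation"],
}
# card_validation sits inside payment, which sits inside checkout_master.
print(sorted(impacted_flows(contains, "card_validation")))
```

A change to the `card_validation` subflow is flagged in `payment` and, transitively, in `checkout_master`, while `onboarding_master` is correctly left untouched.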

Automated test maintenance

Modelling also removes the second challenge of change for QA: manual test maintenance.

Parts two and three of this series discussed how test cases, test data and test scripts are all traceable to the models from which they were generated. As those models change, the optimised test suite is regenerated rapidly. Test teams might further create coverage profiles to target the affected logic with a greater degree of rigour, focusing regression on logic impacted by the last code commit.

Reactive test automation, driven by central flowchart models.

QA in this approach becomes an automated comparison of the logic specified in the models with the system reflected in the code. As an organisation’s understanding of the ideal system changes, the models evolve. This auto-updates the rigorous test suite that is tied to those models, continuously testing fast-changing systems.
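A minimal illustration of that targeting step (the paths and node names are invented for the example): given the regenerated paths and the set of nodes touched by the last commit, a coverage profile can simply keep the paths that exercise the changed logic:

```python
def focus_regression(paths, changed_nodes):
    """Keep only the generated paths (tests) that pass through changed logic."""
    changed = set(changed_nodes)
    return [path for path in paths if changed & set(path)]

paths = [
    ["start", "validate", "ok", "end"],
    ["start", "validate", "fail", "end"],
    ["start", "validate", "mfa", "ok", "end"],
]
# The last commit only touched the MFA step.
print(focus_regression(paths, {"mfa"}))  # only the third path needs re-running
```

In practice a coverage profile would weigh risk and coverage criteria, not just node membership, but the principle is the same: regression effort follows the change.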

Regenerating a set of linked test cases, data and scripts is generally far quicker and more reliable than attempting to keep individual assets aligned manually. Modelling therefore provides the QA resilience and QA flexibility needed to deliver fast-changing applications that accurately reflect the latest business requirements.

Join Curiosity and Jim Hazen for “In the beginning there was a model: Using requirements models to drive rigorous test automation”

Watch the webinar

[Image: Pixabay]

