Ujjwal Kumar Singh
SDET @ Skeps
He/Him
Open to speaking, writing, podcasting, and teaching
Hi, I’m Ujjwal, a software tester and quality advocate. Exploring how quality works beyond tools and into systems, decisions, and trade-offs.
Substack: https://substack.com/@beinghumantester
Achievements
Certificates
Awarded for: Achieving 5 or more Community Star badges
Activity
Earned: You're Not Ready For Quality Engineering with Callum Akehurst-Ryan
Earned: Traceability
Contributed: Definitions of Traceability
Earned: Load Testing
Contributed: Definitions of Load Testing
Contributions
Traceability in software testing is the ability to connect and follow relationships between requirements, risks, test cases, defects, and code changes across the software lifecycle. It helps teams reason systematically about failures, gaps, and change, while providing visibility into what has been tested, what may be affected, and where uncertainty still exists.

Traceability operates in multiple directions. Forward traceability follows requirements through implementation and testing to ensure expected behaviours are covered. Backward traceability links defects or test failures back to their originating requirements or risks. Horizontal traceability connects related artefacts at the same level, such as overlapping coverage across features, workflows, or services. Effective traceability requires all three perspectives, not just the vertical mapping many teams default to.

A traceability link confirms that a test exists for a requirement, not that the test is meaningful, well designed, or passing. Traceability provides visibility into coverage relationships, but it does not guarantee coverage quality. The two should not be conflated.

Connecting tests to explicit risks supports prioritisation, regression selection, release exposure assessment, and failure triage. Without risk linkage, traceability can incorrectly imply that all requirements carry equal importance, which rarely reflects the reality of systems under pressure.

Traceability also strengthens impact analysis. When requirements change or defects are discovered, traceability links help teams identify which tests need review, which areas carry regression risk, and where coverage gaps may have been introduced. This becomes particularly valuable in large or distributed systems where the consequences of a change are not immediately obvious.

Used well, traceability is a decision-making tool that helps teams test smarter, communicate coverage clearly, and respond to change with confidence rather than guesswork.
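As a minimal sketch, these relationships can be held as plain data and queried in any direction; the requirement, risk, and test identifiers below are hypothetical, not a prescribed schema:

```python
# Minimal traceability store. REQ-*, RISK-*, and TC-* identifiers are
# hypothetical; real teams would source them from requirements and
# test-management tools.
from collections import defaultdict

links = defaultdict(set)          # forward: requirement -> test cases
risks = {"REQ-12": "RISK-3"}      # requirement -> risk it mitigates

links["REQ-12"].update({"TC-101", "TC-102"})
links["REQ-15"].add("TC-101")     # horizontal: TC-101 covers two requirements

# Forward traceability: which tests need review if REQ-12 changes?
print(sorted(links["REQ-12"]))                      # ['TC-101', 'TC-102']

# Backward traceability: which requirements does a failing TC-101 implicate?
failing = "TC-101"
print(sorted(req for req, tcs in links.items() if failing in tcs))
# ['REQ-12', 'REQ-15']

# Risk linkage supports prioritisation: a failure against REQ-12 maps to RISK-3.
print(risks.get("REQ-12"))                          # RISK-3
```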
Load testing is a type of performance testing that evaluates how a system behaves under a defined workload, typically modelled on expected or peak production usage. Its primary goal is to determine whether the system can respond reliably, consistently, and within pre-defined performance thresholds when multiple users interact with it concurrently. Those thresholds, expressed as Service Level Objectives or explicit acceptance criteria, must be agreed before the test runs. Without them, a load test produces observations, not pass or fail signals.
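As an illustration, a pass/fail gate can compare a measured percentile against the agreed objective; the latencies and the 300 ms objective below are hypothetical:

```python
import math

# Hypothetical gate: the run fails if 95th-percentile latency exceeds
# the Service Level Objective agreed before the test.
SLO_P95_MS = 300
latencies_ms = [120, 180, 210, 250, 290, 310, 640]   # illustrative samples

def p95(samples):
    """Nearest-rank 95th percentile."""
    ordered = sorted(samples)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

measured = p95(latencies_ms)
verdict = "PASS" if measured <= SLO_P95_MS else "FAIL"
print(f"p95={measured}ms against SLO={SLO_P95_MS}ms: {verdict}")
```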
Unlike stress testing, which pushes a system beyond its limits to identify breaking points, load testing operates within anticipated demand boundaries. It answers a focused question: does the system perform acceptably under the conditions it is expected to handle?
A load test is only as meaningful as its workload model. This includes not just concurrent users or requests per second, but also the shape of the load: whether traffic ramps gradually, remains steady, or follows realistic production patterns such as a diurnal curve. Representative user journeys, think times, and session durations all matter. Wherever possible, the workload model should be derived from production traffic data or analytics rather than assumptions; this is one of the most consequential decisions in the entire exercise.
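One way to express such a model in code is with Locust, a Python load-testing tool. The journey, think times, and ramp below are illustrative assumptions, not recommendations; real values should come from production data:

```python
# Illustrative Locust workload model: a weighted user journey with think
# times, plus a gradual ramp to a steady plateau.
from locust import HttpUser, LoadTestShape, task, between

class ShopperJourney(HttpUser):
    wait_time = between(2, 8)   # think time between actions, in seconds

    @task(3)
    def browse(self):
        self.client.get("/products")      # most sessions only browse

    @task(1)
    def checkout(self):
        self.client.post("/cart/checkout", json={"sku": "A-1"})

class RampToSteady(LoadTestShape):
    """Ramp to 200 users over 5 minutes, hold steady, stop at 30 minutes."""
    def tick(self):
        t = self.get_run_time()
        if t > 1800:
            return None                   # end of test
        users = min(200, int(t / 300 * 200) or 1)
        return (users, 10)                # (user count, spawn rate per second)
```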
Load testing measures response time, throughput, error rate, and resource utilisation across application, database, and infrastructure layers. It helps identify bottlenecks, slow dependencies, memory pressure, and scalability limitations. However, load tests reveal symptoms, not root causes. Diagnosing failures requires supporting telemetry: metrics, distributed traces, logs, and profiling data collected during execution.
The reliability of results also depends heavily on environment fidelity. If the test environment differs significantly from production in topology, configuration, data volume, or external dependencies, the conclusions may be misleading regardless of how well the test itself was designed.
Effective load testing examines how performance degrades as demand increases, how the system recovers when load subsides, and whether user experience remains within acceptable bounds throughout. These behaviours matter as much as whether the system survives the load itself. Where tests involve sustained load over extended periods to observe memory growth or gradual degradation, this moves into the adjacent practice of soak testing, which carries its own distinct objectives.
Load testing sits at the intersection of development, operations, and architecture. In organisations where ownership is unclear, it tends to be performed poorly or not at all. Treating it as a shared responsibility, often under a performance engineering or site reliability function, significantly improves both the quality of the tests and the organisational response to findings.
In mature engineering organisations, load testing is not a one-time pre-release activity. It becomes part of continuous delivery pipelines and ongoing system health monitoring as systems change and user behaviour shifts.
cURL is a command-line tool for sending HTTP requests and inspecting raw responses. Testers use it to interact directly with API endpoints, bypassing UI and client-side layers that can obscure where a problem originates. It supports common HTTP methods (GET, POST, PUT, DELETE, PATCH), custom headers, request payloads, authentication schemes (Basic Auth, Bearer tokens, API keys), and TLS options, including testing against self-signed certificates using --insecure in non-production environments.

From a testing perspective, cURL is valuable for reproducing issues with precision, validating API contracts, and testing authentication and authorisation flows in isolation. It also allows testers to assert specific response codes, headers, and payloads without additional tooling. Because it is scriptable, cURL can be embedded in shell scripts, CI pipelines, and lightweight smoke test suites, making it useful across both exploratory and automated testing.

Unlike GUI-based tools such as Postman, cURL’s strength lies in its portability. It is available on most platforms, requires little or no setup, and produces commands that can be shared, reproduced in any terminal, and version-controlled alongside test assets.
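For instance, a single command can exercise several of these capabilities at once; the endpoint, payload, and token below are placeholders:

```sh
# Hypothetical endpoint and token; -w prints the status code so a
# script or CI step can assert on it.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://api.example.com/v1/orders" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"sku": "A-1", "qty": 2}'
```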
The Y2K38 problem is a software defect that affects systems using a signed 32-bit integer to store Unix timestamps. On 19 January 2038 at 03:14:07 UTC, the timestamp value will exceed its maximum limit (2,147,483,647), causing an overflow. The value wraps to a large negative number, which systems interpret as a date in 1901.
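A short, portable Python sketch makes the boundary concrete:

```python
import struct
from datetime import datetime, timedelta, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)
LIMIT = 2**31 - 1                    # max value of a signed 32-bit time_t

print(EPOCH + timedelta(seconds=LIMIT))
# 2038-01-19 03:14:07+00:00, the last representable moment

struct.pack("<i", LIMIT)             # still fits in 32 bits
try:
    struct.pack("<i", LIMIT + 1)     # one second later no longer does
except struct.error as exc:
    print("overflow:", exc)

# A C-style wraparound reinterprets the same bits as a large negative
# number, which resolves to a date in December 1901.
wrapped = (LIMIT + 1) - 2**32        # -2147483648
print(EPOCH + timedelta(seconds=wrapped))
# 1901-12-13 20:45:52+00:00
```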
Unlike the Year 2000 problem, which was largely a display and parsing issue, Y2K38 is a binary storage limitation. It can impact multiple layers — operating systems, middleware, databases, and applications — wherever a 32-bit time_t is used.
Affected systems may silently produce incorrect timestamps, miscalculate durations, expire sessions or certificates incorrectly, or trigger scheduled jobs at the wrong time. The risk is not limited to 2038; any system storing or validating dates beyond this boundary may already behave incorrectly today. For example, a session token issued today with a 15-year expiry already crosses the 2038 boundary and may fail on systems that have not yet migrated.
From a testing perspective, Y2K38 highlights the importance of boundary testing with future dates, auditing dependencies for 32-bit time usage, and using clock-mocking to simulate post-2038 scenarios. Remediation typically involves migrating to 64-bit time representations and validating time-dependent behaviour across all integrated components.
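As a sketch, a post-2038 scenario might be simulated with freezegun, one common Python clock-mocking library; the assertions are illustrative:

```python
from datetime import datetime, timedelta, timezone
from freezegun import freeze_time   # third-party clock-mocking library

# Illustrative boundary test: freeze the clock one second past the
# 32-bit rollover and check that expiry arithmetic still behaves.
@freeze_time("2038-01-19 03:14:08")
def test_expiry_after_rollover():
    now = datetime.now(timezone.utc)
    assert now.year == 2038                   # clock did not wrap to 1901
    assert now + timedelta(days=30) > now     # durations still move forward

test_expiry_after_rollover()
```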
A Schrödinbug is a bug that works purely by accident until someone examines or modifies the code. The software behaves correctly even though the logic is wrong, usually because of lucky conditions such as memory being zeroed or timing working out just right. The moment someone refactors the code, adds logs, or changes compiler settings, the behaviour breaks. The problem was always there, just hidden; nothing new was introduced. These bugs rely on undefined or implementation-defined behaviour, which makes them fragile and unpredictable.

They are dangerous because the code looks stable and builds false confidence. Apparent correctness hides serious flaws that can surface after a compiler update, platform change, or even a small modification. If something works only because of luck, it is already broken; it just has not failed yet.
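A contrived Python analogue, with an invented admin id: the check below has always passed, but only because of a CPython implementation detail (small-integer interning), not because the comparison is correct.

```python
ADMIN_ID = 200   # happens to sit inside CPython's small-int cache (-5..256)

def is_admin(user_id: int) -> bool:
    # BUG: identity (`is`) instead of equality (`==`). It has always
    # "worked" because CPython interns small integers, so an id parsed
    # from a request is the very same object as the constant.
    return user_id is ADMIN_ID

assert is_admin(int("200"))      # passes today, purely by accident

# A "harmless" renumbering pushes the id past the cache boundary and the
# latent defect surfaces; nothing new was introduced.
ADMIN_ID = 1000
print(is_admin(int("1000")))     # False in CPython: the bug was always there
```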
A hiccup error is a brief, self-resolving failure that appears once and then disappears. It usually happens because of momentary instability: a network glitch, a timing issue, a short resource spike, or a system running close to its limits. When the same action is retried, it succeeds, which makes the issue easy to ignore.

For example, a payment API returns a 503 error and then works on the next attempt. The transaction completes, so no one investigates. Or a database query times out during a CPU spike, but the retry finishes in milliseconds. These errors leave little trace and rarely point to a clear code defect; they expose fragile interactions between systems.

Hiccup errors are dangerous because teams treat them as flukes until they become frequent. A weekly hiccup turns daily, then hourly. By the time it is taken seriously, the system is already degraded. What looks like noise at first is often an early warning of a deeper reliability problem.
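One mitigation is to make hiccups observable even when the retry succeeds. A minimal sketch, assuming a generic callable and Python's standard logging:

```python
import logging
import time

log = logging.getLogger("hiccups")

def call_with_retry(fn, attempts=3, delay=0.5):
    """Retry a flaky call, but leave a trace of every transient failure."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:          # real code should catch narrower types
            if attempt == attempts:
                raise                     # persistent failure: surface it
            # The retry hides the hiccup from the user; this log line keeps
            # it visible to the team, so a weekly fluke turning hourly shows
            # up in dashboards instead of disappearing.
            log.warning("hiccup on attempt %d/%d: %s", attempt, attempts, exc)
            time.sleep(delay * attempt)   # simple linear backoff
```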