Wednesday, October 15, 2008

Gregory Brill on Indian Outsourcers

Gregory has some points about Indian outsourcers! Read on: http://home.infusionblogs.com/gbrill/Lists/Posts/Post.aspx?ID=36

Some of his points are below:

  • ...Indian outsourcers instead take junior people, bill them as “senior” and/or have someone other than the senior person they show to their client actually working on the project…hence, that and other failings in their business model results in them taking 8 times longer.
  • With that said, let’s make a CLEAR distinction between Indian outsourcing companies like Wipro and TaTa and people of Indian descent.
  • ...the reason we went directly to India ourselves and invested in starting our own Indian subsidiary instead of partnering with an outsourcer was *precisely* because we believed Indian talent to be on-par with anywhere else. We believed their efficiencies where obstructed because of the big outsourcer/middleman in the middle...
  • ...I never said Indian development didn’t work, I said Indian outsourcing companies don’t work.
  • ... A business process typically employed by Indian outsourcers is to offer extremely low rates to start, but then increase them substantially once a dependency is established.
  • ...Maintenance/migration/administration work great…but sophisticated projects rarely succeed (or, if they succeed, they don’t do so with the savings promised).


Tuesday, October 14, 2008

In the name of Vendor Consolidation

I had to rephrase this post after it was pointed out to me that some really intelligent people out there had the time to pick up the clues in the post and figure out whom I was really talking about (while surprisingly remaining incapable of getting the message). Even if there were any clues, I know they were decipherable only by those who could relate to them, so I fail to understand what secret was revealed. In fact, those who fear secrets should worry about the knowledge I gained while working for them: knowledge about the product, the program, its future direction and plans, and the technology used, especially for testing. It's possible that I could put all of that to use in other assignments, even if I never explicitly say it came from such-and-such previous experience :-).

Wednesday, February 28, 2007

Bill Gates on Software Testing

I don't know when he said it, but it is good to see the importance of testing explained by Bill Gates as follows:

"50% of my company employees are testers, and the rest spend 50% of their time testing!”
- Bill Gates, 1995

Friday, February 02, 2007

Regression Testing

I am a student on James Bach's Rapid Software Testing Online course. The course has a class forum where we can discuss testing topics and challenges; it is one of the great sources of information for me right now. In the forum I have been discussing the topic of Regression Testing. The thread was started to share the Regression Testing Strategies post by Bj Rollison. This post is mostly based on those discussions, and particularly on the contributions made by Erwin van Trier. I must thank Erwin for his views, which helped me think deeply and clarify some concepts about Regression Testing.

Now in this post I will share some of the insights from the forum discussion. As far as the definition of Regression Testing goes, Erwin likes to say it is an activity of "retesting (of) earlier tested features/functionality". James Bach defines it as "any testing motivated by the risk that a change to the product could have harmed it in some way". These two definitions are very different from the commonly held view that it is "repetition of previously executed tests". This latter definition leaves no scope for designing new tests that target the regressions a change or addition may have introduced into the system. I think it is not a definition at all, because it does not say "what" regression testing is; it tells "how" to perform regression testing, i.e., by repeating previously executed tests. It leaves us with no choice but to execute the previous tests to find any regressions. But what if those test cases were not complete? We will miss regressions wherever the existing test cases do not touch a possibly regressed function or path. So we also need to design new test cases as part of regression test planning. A regression test strategy therefore includes the following (I don't think this is exhaustive), adopting some points from Bj Rollison's post; a rough sketch of assembling such a suite follows the list:


* Functional impact analysis
* Selecting previously written test cases (we may use our discretion depending on schedules, etc.)
* Selecting previously fixed functional defects
* And writing additional test cases, too.
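
To make the strategy concrete, here is a minimal sketch of how such a suite might be assembled. Everything in it is a made-up illustration: the test-case record, the impacted_areas set, and the selection rule are my assumptions, not something from Rollison's post or the forum.

```python
# A rough, hypothetical sketch of assembling a regression test suite
# from the strategy points above. The data model and selection rules
# are illustrative only.

from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    area: str                        # feature/functional area the test touches
    from_fixed_defect: bool = False  # test written for a previously fixed bug

def build_regression_suite(all_tests, impacted_areas, new_tests):
    """Select old tests covering impacted areas, keep defect-based
    tests, and add newly designed tests targeting the change."""
    suite = [t for t in all_tests
             if t.area in impacted_areas or t.from_fixed_defect]
    return suite + list(new_tests)

# Example: a change touched the "search" area, so we pick the old
# search tests, all defect-regression tests, and one new test.
old = [TestCase("TC-1", "search"),
       TestCase("TC-2", "login"),
       TestCase("TC-3", "search", from_fixed_defect=True)]
new = [TestCase("TC-9", "search")]
print([t.id for t in build_regression_suite(old, {"search"}, new)])
# -> ['TC-1', 'TC-3', 'TC-9']
```

The point of the sketch is simply that the suite is a union of selected old tests and newly designed ones, not a replay of everything that ran before.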

Now this whole set of test cases would be called the Regression Test Suite. After executing this suite, the tests may find some bugs. For some people, all of these bugs are regression bugs because they are found during the regression testing phase. For others, differentiating regression bugs from non-regression bugs is not needed. (By non-regression bugs I mean bugs found during regression testing that have nothing to do with the changes/additions made in the build.) I was able to think of one benefit of this classification: we can say whether our regression suite was successful in identifying regressions, and hence whether the strategy adopted for designing the regression test suite was effective enough. But I don't think I am convincing enough in stating that as a benefit. Maybe I know what the benefit is but am not able to put it in words. Or maybe there is no benefit at all?
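
If one does attempt that classification, a crude way to sketch the bookkeeping (again, purely hypothetical, not a method anyone proposed in the forum) is to check whether each bug's area was touched by the change:

```python
# Hypothetical bookkeeping: tag bugs found by the regression suite as
# regression bugs (in an area the change touched) or non-regression
# bugs (found during the phase but unrelated to the change).

def classify_bugs(bugs, changed_areas):
    regression, non_regression = [], []
    for bug_id, area in bugs:
        (regression if area in changed_areas else non_regression).append(bug_id)
    return regression, non_regression

bugs = [("BUG-1", "search"), ("BUG-2", "reporting")]
reg, non_reg = classify_bugs(bugs, changed_areas={"search"})
print(reg, non_reg)  # ['BUG-1'] ['BUG-2']
```

The ratio of regression to non-regression bugs could then serve as one rough signal of how well the suite targeted the change.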

Thursday, January 04, 2007

Do we use Test Design Techniques?

I published the following article on StickyMinds. For the sake of discussion, I am reproducing it here.

Test case identification is an essential skill that every test engineer must possess. A test case is "a set of inputs, execution conditions, and expected results developed for a particular objective." It is also the "smallest entity that is always executed as a unit from beginning to end." Ideally, by executing a program with every possible input, or by ensuring that every possible path through the program is executed, one could claim that 100% of the program has been tested. But it is now well established that exhaustive input testing or exhaustive path testing is not possible.
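
A quick back-of-the-envelope calculation shows why. The form and field sizes below are invented purely for illustration:

```python
# Why exhaustive input testing is infeasible: even a single form with
# a few small fields explodes combinatorially. The field sizes are
# illustrative assumptions, not from the article.

age_values = 130        # say, ages 0..129
country_values = 200    # roughly 200 country codes
name_values = 26 ** 10  # 10-letter names, lowercase a-z only

total = age_values * country_values * name_values
print(f"{total:.3e} distinct inputs")  # ~3.670e+18

# At a million test executions per second, this is still well over a
# hundred thousand years of testing:
seconds = total / 1_000_000
print(f"{seconds / (3600 * 24 * 365):.0f} years")
```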

To overcome this challenge, many test design techniques/methods/approaches are available that systematically narrow down the number of test cases in an effective way, allowing the broadest testing coverage with the least effort. A number of books and articles have been written on various test design techniques; in fact, test design nowadays seems to be an actively researched area. As a result, a large knowledge base is available to software testing practitioners. But are the practitioners making use of it? I think not. Myers wrote in his book that " . . . there is no guarantee that a person has used a particular methodology . . . properly and rigorously." Dustin reiterated the same in his book, saying that "While test techniques have been documented in great detail, very few test engineers use a structured test-design technique."
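
As one concrete example of such a technique, here is a minimal sketch of equivalence partitioning combined with boundary value analysis. The "quantity" field and its valid range of 1 to 100 are assumptions made up for this illustration:

```python
# Equivalence partitioning + boundary value analysis, sketched for a
# hypothetical "quantity" field that accepts integers 1..100.
# Instead of testing all values, pick one representative per
# partition plus the values at and around each boundary.

LOW, HIGH = 1, 100

partitions = {
    "below range (invalid)": 0,
    "within range (valid)": 50,
    "above range (invalid)": 101,
}
boundaries = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

test_values = sorted(set(list(partitions.values()) + boundaries))
print(test_values)  # [0, 1, 2, 50, 99, 100, 101] -- 7 tests, not 100+
```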

Most of the time, test case identification is an ad-hoc activity done by a group of so-called experienced testers, or by newcomers to the field under the guidance of the former. Testers are found using their experience, intuition, and analytical skills to derive test cases. On numerous occasions when I have asked a tester "How do you identify test cases?", I have received the plain answer "By using the Functional Specification." In a few cases they would add the names of other specifications too. Then I try to be more specific and ask, "I want to know how the test cases are identified from those specifications." The answer comes: "After reading the specifications, if there are any doubts, we send clarifications across to the spec writer, and once the clarifications are okay, the test cases are identified by reading those specifications."

That was my experience conducting interviews. But in my on-the-job experience so far, I have never seen a colleague use any formal test case design method. At least I never saw them drawing Cause-Effect graphs, State-Charts, Activity Diagrams, etc. Maybe they did all of this thought-process in their heads. But considering the complexity of the modules they were sometimes handling, I feel it was simply impossible to do that kind of analysis mentally.

What could be the reasons for not using formal test design methods? Are they not useful? Are they so complicated that they are difficult to use? Are the inputs required for applying these methods not available in the expected form? Or are they simply not known?

They are useful: a number of studies and examples can be found proving this. Some methods can be complicated, but that again depends on the level of training and the motivation an individual has in using them. But yes, I have found in my interviewing experience that the methods are not known, or if they are known, it is only because candidates have read some interview tips. In many cases it could also be that inputs like specification documents are not conducive to applying these techniques, or that the process of designing tests is not as clear-cut as the processes of software design and testing.

Ad-hoc identification of test cases, without following any formal test design technique, has every possibility of producing tests that are unreliable, redundant, and inadequate in coverage. So what can be done to ensure that test design techniques are applied when analyzing specifications, and that this analysis is used to derive high-yield test cases? How do we bring that rigor, discipline, and commitment into the test team?

Even today, Software Testing is not given due importance in the curriculum, which means most individuals are not trained in using test design techniques. It can also be noticed that Software Testing is not the first, or even a voluntary, choice among Computer/IT graduates, which should explain the low motivation for learning test design techniques. In fact, at least in Hyderabad, India, a number of private institutes run courses on software testing, but all of them deal with some kind of automation tool training. None of the available courses teach formal test design techniques and their practical application.

Because practitioners have little training and practical experience in using these techniques, they hardly see their utility. Moreover, over time, as they move from one project to the next, designing test cases quickly and intuitively by skimming the specifications (and sometimes by playing with the software when specifications are absent) becomes a habit. They even start to think that this ad-hoc method works, which gives them a false feeling of expertise. They start justifying their method in the name of exploratory testing, error-guessing, experience, etc.

So:
1. Test engineers should be trained in the practical application of known test design techniques.
2. It should be made mandatory to explicitly use the applicable test design techniques in the test design phase.
3. Every test engineer working on test case identification should be asked to document and present their analysis.
4. During the presentation, a list of all known test design techniques should be displayed, either as a chart or written on a whiteboard (a rough sketch of such a technique-applicability record follows this list).
5. Techniques not used by the test engineer should be identified and their applicability discussed; the test engineer should be able to justify any non-applicability.
6. If no applicable technique was used, the test engineer should be able to demonstrate the completeness of the identified test cases and the rigor with which they were derived.
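
To illustrate points 4 to 6, here is a rough sketch of what such a technique-applicability record might look like during a review. The technique names are standard, but the record structure and the example justifications are invented:

```python
# Hypothetical technique-applicability record for a test design
# review. Every known technique must be either applied or explicitly
# justified as not applicable.

KNOWN_TECHNIQUES = [
    "Equivalence Partitioning",
    "Boundary Value Analysis",
    "Decision Tables / Cause-Effect Graphing",
    "State Transition Testing",
]

review = {
    "Equivalence Partitioning": "applied to all input fields",
    "Boundary Value Analysis": "applied to field length limits",
    "Decision Tables / Cause-Effect Graphing": "N/A: no rule combinations",
    "State Transition Testing": "N/A: module is stateless",
}

# Flag any technique that was neither applied nor justified.
for t in KNOWN_TECHNIQUES:
    print(f"{t}: {review.get(t, 'MISSING: needs discussion')}")
```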

Tuesday, December 26, 2006

Ideal Software Tester

I am not sure whether I am a software tester by virtue, but at least I am a software tester by choice (in the beginning I had the option to choose development or testing). I am also not sure if I am test-obsessed or test-infected. Since I chose the software testing profession without any formal training in computers or software testing, I try to learn from many sources. I consider myself a mere practitioner of software testing who uses ideas developed by the many experts who freely share their wisdom. In that pursuit I sometimes wonder if I have what it takes to be a successful tester. "First, Break All the Rules" started me thinking about whether I have the talents required for software testing. But what are those talents? I tried to solicit some help and was advised to take a crack at identifying tester talents on my own. So here I go, making an attempt at listing the talents of software testers. It is not my original work! I found this list in a PowerPoint presentation by Constance Colthorp. I understand there could be other sources where I could find this, but this was what I had handy. I must admit that it will take time for me to think originally on the topic of tester talents; for that matter, on any topic in software testing.

Constance listed the following points for an "ideal QA tester":

  • Great attention to detail and an eye for details
  • Not easily bored
  • Can maintain focus on a given task
  • Divergent thinker; open to many alternatives
  • Willing to repeat the same task time and again
  • Good tolerance for ambiguity
  • Willing to accept that ‘you can’t catch them all’ (or fix every ‘bug’ that is identified)

And since all of them seem to be "patterns of thought, feeling, or behavior", I consider them to be some of the talents of a software tester.

Tuesday, December 19, 2006

Many times we come across situations where we realize we missed some test cases or could have written a better bug report. In the moment we learn from them, but over time we tend to forget that learning. I thought blogging about these kinds of things in the form of questions would help me catalogue them for future use. So here is one to start with...

Consider an application (APP1) that collects some user information and puts it into a database (DB1). One piece of information it collects is, say, a PostalCode. The maximum number of digits/characters allowed in the PostalCode is 10.

There is another application (APP2) that provides search capability to users. But APP2 does not search the same database, DB1; it has its own database, DB2. The data is pulled into DB2 from DB1 using ETL jobs.

That was the application architecture.

Now coming to the search functionality: the user is given a web page where she can enter search criteria and submit the query. The text field for PostalCode accepts a maximum of five characters. When the user submits the form, the UI code calls a stored procedure in the backend. Unfortunately, this stored procedure does an exact match on PostalCode (it does not use the SQL keyword LIKE). As a result, if the user enters, say, 12345 as the PostalCode, she sees only those records whose PostalCode is exactly 12345; records with PostalCodes such as 1234567 or 123458 do not show up, although end users expect those in the search results too.
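
A tiny simulation of the mismatch (the records and the matching logic are stand-ins for the real stored procedure, purely for illustration):

```python
# Illustrative stand-in for the stored procedure's behavior: an exact
# match misses longer postal codes that share the 5-char prefix.

records = ["12345", "1234567", "123458", "99999"]

exact = [r for r in records if r == "12345"]
prefix = [r for r in records if r.startswith("12345")]  # LIKE '12345%'

print(exact)   # ['12345']                        <- what users get
print(prefix)  # ['12345', '1234567', '123458']   <- what they expect
```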

Whatever the reason may have been, the fix for this issue was as follows: the ETL job was modified to truncate the PostalCode to 5 characters whenever it is longer, before loading it into the search application's database (DB2).
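
A minimal sketch of what that fix amounts to; the function name and the in-memory ETL shape are my assumptions, since the real job could be SQL or a tool-specific transform:

```python
# Hypothetical sketch of the modified ETL step: truncate PostalCode
# to 5 characters while copying rows from DB1 to DB2.

MAX_SEARCH_LEN = 5

def transform_postal_code(code: str) -> str:
    """Truncate codes longer than 5 chars before loading into DB2."""
    return code[:MAX_SEARCH_LEN]

db1_rows = ["12345", "1234567", "1234", ""]
db2_rows = [transform_postal_code(c) for c in db1_rows]
print(db2_rows)  # ['12345', '12345', '1234', '']
```

Note that after this fix, a search for 12345 will also return the record originally stored as 1234567, since both become 12345 in DB2.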

Now, as a tester, what test cases should we write/execute to verify this fix?