Wednesday, November 30, 2016

Understanding Stack and Heap

The stack is the memory set aside as scratch space for a thread of execution.

When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called.

The stack is always reserved in a LIFO order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. 

Unlike the stack, there's no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there's typically only one heap for the application (although it isn't uncommon to have multiple heaps for different types of allocation).



To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.


What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.


What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).


What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or free. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor's cache, making it very fast.
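
To make this concrete, here is a minimal C# sketch (the class and field names are made up for illustration) showing which pieces typically live on the stack and which on the heap: local value-type variables and references are stored on the stack, while objects created with new are allocated on the heap.

// Hypothetical example: typical stack vs. heap placement in C#.
class StackHeapDemo
{
    static void Main()
    {
        int counter = 10;           // value-type local - stored on the stack
        Person p = new Person();    // the reference 'p' lives on the stack,
                                    // the Person object itself is allocated on the heap
        p.Age = counter;            // copies the stack value into the heap object
    }   // when Main returns the stack frame is popped; the Person object stays
        // on the heap until the garbage collector reclaims it
}

class Person
{
    public int Age;                 // field stored inside the heap-allocated object
}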


Understanding Generics in C#

Generics allow you to delay the specification of the data type of programming elements in a class or a method, until it is actually used in the program. In other words, generics allow you to write a class or method that can work with any data type.

When the compiler encounters a constructor for the class or a function call for the method, it generates code to handle the specific data type.

Generics enrich your programs in the following ways:

  • They help you maximize code reuse, type safety, and performance.
  • You can create generic collection classes. The .NET Framework class library contains several generic collection classes in the System.Collections.Generic namespace. You may use these generic collection classes instead of the collection classes in the System.Collections namespace.
  • You can create your own generic interfaces, classes, methods, events, and delegates.
  • You may create generic classes constrained to enable access to methods on particular data types.
  • You may get information on the types used in a generic data type at run-time by means of reflection.


A simple example of generics -

// Declare the generic class.
public class GenericList<T>
{
    void Add(T input)
    {
        // Do something here
    }
}

class TestGenericList
{
    private class ExampleClass
    { }
   
    static void Main()
    {
        // Declare a list of type int.
        GenericList<int> list1 = new GenericList<int>();

        // Declare a list of type string.
        GenericList<string> list2 = new GenericList<string>();

        // Declare a list of type ExampleClass.
        GenericList<ExampleClass> list3 = new GenericList<ExampleClass>();
    }

}
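
Building on the list above, here is a small hedged sketch (the type and method names are made up for illustration) of a generic method that uses a constraint, so the method can call members declared on a particular interface of the type parameter.

// Hypothetical example: the where clause constrains T to IComparable<T>,
// which lets the method call CompareTo safely. (Requires using System.)
public static T Max<T>(T first, T second) where T : IComparable<T>
{
    return first.CompareTo(second) >= 0 ? first : second;
}

// Usage:
// int larger = Max(3, 7);               // T is int
// string later = Max("apple", "pear");  // T is string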

Monday, October 24, 2016

Difference between Performance Testing, Load Testing and Stress Testing


Performance Testing
Performance testing is performed to ascertain how the components of a system perform under a given workload. Resource usage, scalability and reliability of the product are also validated under this testing. Performance testing is a subset of performance engineering, which focuses on addressing performance issues in the design and architecture of a software product.

Performance Testing Goal:

The primary goal of performance testing includes establishing the benchmark behaviour of the system. There are a number of industry-defined benchmarks, which should be met during performance testing.

Performance testing does not aim to find functional defects in the application; it addresses the more critical task of verifying the benchmarks and standards set for the application. Accuracy and close monitoring of the performance and results of the test are the primary characteristics of performance testing.

Example:

For instance, you can test the application's network performance on a Connection Speed vs. Latency chart. Latency is the time taken for data to travel from source to destination. Thus, a 70 KB page should take no more than 15 seconds to load over a worst-case 28.8 kbps modem connection (latency = 1000 milliseconds), while the same page should appear within 5 seconds over an average 256 kbps DSL connection (latency = 100 milliseconds). For a 1.5 Mbps T1 connection (latency = 50 milliseconds), the performance benchmark would be set at 1 second.

For example, the time difference between the generation of a request and the acknowledgement of the response should be in the range of x ms (milliseconds) and y ms, where x and y are agreed standard values. A successful performance test should surface most of the performance issues, which could be related to the database, network, software, hardware, etc.


Load Testing
Load testing tests the system by constantly and steadily increasing the load until it reaches its threshold limit. It is the simplest form of testing that employs automation tools such as LoadRunner or other similar tools. Load testing is also known as volume testing or endurance testing.

The sole purpose of load testing is to assign the system the largest job it could possibly handle, to test its endurance and monitor the results. Interestingly, sometimes the system is fed an empty task to determine its behaviour in a zero-load situation.

Load Testing Goal:

The goals of load testing are to expose defects in the application related to buffer overflows, memory leaks and memory mismanagement. Another target is to determine the upper limit of all the components of the application, such as the database, hardware and network, so that it can manage the anticipated future load. The issues that typically come out of load testing include load-balancing problems, bandwidth issues and the capacity of the existing system.

Example:

For example, to check the email functionality of an application, it could be flooded with 1000 users at a time. Now, 1000 users can fire the email transactions (read, send, delete, forward, reply) in many different ways. If we take one transaction per user per hour, that gives 1000 transactions per hour. By simulating 10 transactions per user, we could load test the email server with 10,000 transactions per hour.

Stress Testing
Under stress testing, various activities are carried out to overload the existing resources with excess jobs in an attempt to break the system down. Negative testing, which includes removing components from the system, is also done as part of stress testing. Also known as fatigue testing, stress testing should capture the stability of the application by testing it beyond its bandwidth capacity.

The purpose behind stress testing is to ascertain the failure of the system and to monitor whether it recovers gracefully. The challenge here is to set up a controlled environment before launching the test so that you can precisely and repeatably capture the behaviour of the system under the most unpredictable scenarios.

Stress Testing Goal:

The goal of stress testing is to analyse post-crash reports to define the behaviour of the application after failure. The biggest concern is to ensure that the system does not compromise the security of sensitive data after the failure. In a successful stress test, the system comes back to normality along with all its components after even the most severe breakdown.

Example:

As an example, a word processor like Writer 1.1.0 by OpenOffice.org is used to create letters, presentations, spreadsheets, etc. The purpose of our stress test is to load it with an excess of characters.

To do this, we repeatedly paste lines of data until the application reaches its threshold for handling a large volume of text. As soon as the size reaches 65,535 characters, it simply refuses to accept more data. The result of stress testing Writer 1.1.0 is that it does not crash under the stress and handles the situation gracefully, which ensures the application works correctly even under rigorous stress conditions.

Software Installation/Uninstallation Testing


Have you performed software installation testing? How was the experience? Installation testing (implementation testing) is quite an interesting part of the software testing life cycle.

Installation testing is like introducing a guest into your home: the new guest should be properly introduced to all the family members so that they feel comfortable. Installing new software is much the same.

If your installation succeeds on the new system, the customer will definitely be happy, but what if things go completely the other way? If installation fails, our program will not work on that system, and worse, it can leave the user's system badly damaged. The user might even need to reinstall the full operating system.

In that case will you make any impression on the user? Definitely not! Your first chance to make a loyal customer is ruined by incomplete installation testing. What do you need to do for a good first impression? Test the installer thoroughly with a combination of manual and automated processes on different machines with different configurations. A major concern in installation testing is time: it takes a lot of time to execute even a single test case. If you are testing a large application installer, think about the time required to run so many test cases on different configurations.

We will look at different methods for performing manual installer testing and some basic guidelines for automating the installation process.

To start installation testing, first decide how many different system configurations you want to test the installation on. Prepare one basic hard disk drive: format this HDD with the most common or default file system, install the most common operating system (Windows) on it, and install some basic required components. Create images of this base HDD, and you can build other configurations on top of this base drive. Make one set of each configuration, such as operating system and file system format, to be used for further testing.

How can we use automation in this process? Dedicate some systems to creating base images of the basic configuration (use software like Norton Ghost to create exact images of the operating system quickly). This will save tremendous time on each test case. For example, if the time to install one OS with the basic configuration is, say, 1 hour, then each test case on a fresh OS will require 1+ hour. But restoring an image of the OS hardly takes 5 to 10 minutes, so you will save approximately 40 to 50 minutes per test case!

You can also use one operating system for multiple installation attempts, each time uninstalling the application and restoring the base state for the next test case. Be careful here: your uninstallation program should already have been tested and be working fine.

Installation testing tips with some broad test cases:

1) Use flow diagrams to perform installation testing. Flow diagrams simplify our task. See example flow diagram for basic installation testing test case. 

Add some more test cases to this basic flow chart; for example, if our application is not the first release, try adding different logical installation paths.

2) If you have previously installed a compact (basic) version of the application, then in the next test case install the full version on the same path used for the compact version.

3) If you are using a flow diagram to track the files written to disk during installation, use the same flow diagram in reverse order to test uninstallation of all the installed files.

4) Use flow diagrams to automate the testing efforts. It will be very easy to convert diagrams into automated scripts.

5) Test the installer scripts that check the required disk space. If the installer reports that 1 MB is required, make sure that no more than 1 MB is actually used during installation. If more is used, flag this as an error.

6) Test the disk space requirement on different file system formats; for example, FAT16 will require more space than the more efficient NTFS or FAT32 file systems.

7) If possible, set up a dedicated system only for creating disk images. As noted above, this will save testing time.

8) Use a distributed testing environment to carry out installation testing. A distributed environment simply saves time, and you can effectively manage all the different test cases from a single machine. A good approach is to create a master machine that drives different slave machines on the network; you can then start installations simultaneously on different machines from the master system.

9) Try to automate the routine that verifies the files written to disk. You can maintain the list of files to be written in an Excel sheet and give this list as an input to an automated script that checks each path to verify correct installation (a sample verification script is sketched after point 13 below).

10) Use freely available software to verify registry changes after a successful installation. Compare the registry changes with your expected change list.

11) Forcefully break the installation process midway. Observe the behaviour of the system and whether it recovers to its original state without any issues. You can test this "break of installation" at every installation step.

12) Disk space checking: this is a crucial check in the installation-testing scenario. You can choose different manual and automated methods to do it. Manually, you can compare the free disk space available on the drive before installation with the disk space reported by the installer script, to check whether the installer is calculating and reporting disk space accurately. Check the disk space after installation to verify accurate usage. Run various combinations of disk space availability, using tools that automatically fill up the disk during installation, and check system behaviour under low disk space conditions.

13) As you check installation, test uninstallation as well. Before each new iteration of installation, make sure that all the files written to disk are removed after uninstallation. Sometimes the uninstallation routine removes only the files from the last upgraded installation, keeping the old version's files untouched. Also check the reboot option after uninstallation, both rebooting manually and forcing it not to reboot.
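
As mentioned in point 9, here is a minimal C# sketch of such a file-verification routine. It is an illustration only; the location and format of the expected-file list (one full path per line in a plain text file exported from the Excel sheet) are assumptions.

// Hypothetical sketch: verify that every file from an expected list exists after installation.
using System;
using System.IO;

class InstallFileVerifier
{
    static void Main(string[] args)
    {
        string expectedListPath = @".\ExpectedFiles.txt";    // assumed location of the exported file list
        int missing = 0;

        foreach (string path in File.ReadAllLines(expectedListPath))
        {
            if (string.IsNullOrWhiteSpace(path))
                continue;                                    // skip blank lines

            if (!File.Exists(path))
            {
                Console.WriteLine("MISSING: " + path);       // report files the installer did not write
                missing++;
            }
        }

        Console.WriteLine(missing == 0
            ? "Installation verified: all expected files are present."
            : missing + " expected file(s) were not found.");
    }
}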

I have addressed many areas of manual as well as automated installation testing. There are still many areas to focus on depending on the complexity of the software being installed. Important tasks not covered here include installation over the network, online installation, patch installation, database checking on installation, shared DLL installation and uninstallation, etc.

Hope this article serves as a basic guideline for those having trouble getting started with software installation testing, whether manual or automated.

Monday, September 5, 2016

What is Usability Testing

Usability testing is a technique for evaluating a product by testing it on users. Since the end user ultimately has to work with the product, it is best to do usability testing before releasing the product to the masses.

Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.


Goals
Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.

Four Areas 
Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)

Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right information?)

Recall -- How much does the person remember afterwards or after periods of non-use?

Emotional response -- How does the person feel about the tasks completed? Is the person confident, stressed? Would the user recommend this system to a friend?

How to perform Localization Testing


Localization means translating your product or website into the local language of a country. As companies grow their business in various countries, they create localized versions of their websites.


Prepare and use the required test environment
If a web site is hosted in English and Japanese languages, it is not enough to simply change the default browser language and perform identical tests in both the languages. Depending on its implementation, a web site may figure out the correct language for its interface from the browser language setting, the regional and language settings of the machine, a configuration in the web application or other factors. Therefore, in order to perform a realistic test, it is imperative that the web site be tested from two machines – one with the English operating system and one with the Japanese operating system. You might want to keep the default settings on each machine since many users do not change the default settings on their machines.


Get correct translation
A native speaker of the language is usually the best resource to translate the text. However, it is not easy to find a multi-lingual tester or to have people from different countries sit in one office.
In that case you might have to depend on translation tools available online, like Google Translate, wordreference.com and dictionary.com.


Start with testing control labels
Labels are the static content in the web site. English labels are usually short, and translated versions tend to expand or contract in length depending on the target language. It is important to spot any issues related to label truncation, overlay on/under other controls, incorrect word wrapping, etc.


Test error messages
It is important that the web site provides correct error messages in the other language. Often error messages are hard-coded in English, and developers forget to translate them during localization.


Do test the data
Usually, multi-lingual web sites store data in the UTF-8 Unicode encoding format. To check the character encoding for your website in Mozilla Firefox, go to View -> Character Encoding; in IE, go to View -> Encoding. Data in different languages can easily be represented in this format. Make sure to check the input data: it should be possible to enter data in the other language into the web site, and the data displayed by the web site should be correct. The output data should be compared with its translation.


Be aware of cultural issues
A challenge in testing multi-lingual web sites is that each language might be meant for users from a particular culture. Many things such as preferred (and not preferred) colors, text direction (this can be left to right, right to left or top to bottom), format of salutations and addresses, measures, currency etc. are different in different cultures. Not only should the other language version of the web site provide correct translations, other elements of the user interface e.g. text direction, currency symbol, date format etc. should also be correct.


Saturday, July 2, 2016

Quality Attributes

What is Quality? 
Quality can be defined in different ways, and the definition may differ from person to person, but ultimately there should be some standards. Quality can be defined as:

Degree of excellence – By Oxford dictionary
Fitness for purpose – By Edward Deming
Best for the customer’s use and selling price – By Feigenbaum

Now let's see how one can measure some quality attributes of a product or application. These attributes can be used for quality assurance as well as quality control.


Reliability 
Measures whether the product is reliable enough to sustain any condition and gives consistently correct results.
Product reliability is measured in terms of how the product works under different environments and conditions.


Maintainability 
Different versions of the product should be easy to maintain. For development it should be easy to add code to the existing system and to upgrade for new features and new technologies from time to time. Maintenance should be cost-effective and easy; the system should be easy to maintain when correcting defects or making a change in the software.


Usability
This can be measured in terms of ease of use. The application should be user friendly, easy to learn, and simple to navigate.
The system must be:
  Easy to use for input preparation, operation, and interpretation of output.
  Provide user interface standards or conventions consistent with other frequently used systems.
  Easy for new or infrequent users to learn to use the system.


Portability
This can be measured in terms of costing issues, technical issues and behavioural issues related to porting.


Correctness
Application should be correct in terms of its functionality, calculations used internally and the navigation should be correct. This means application should adhere to functional requirements.


Efficiency
A major system quality attribute, measured in terms of the time required to complete any task given to the system. For example, the system should utilize processor capacity, disk space and memory efficiently. If the system consumes all the available resources, the user will get degraded performance, failing the system on efficiency. If the system is not efficient, it cannot be used in real-time applications.

Integrity or security
Integrity comes with security. System integrity or security should be sufficient to prevent unauthorized access to system functions, prevent information loss, ensure that the software is protected from virus infection, and protect the privacy of data entered into the system.

Testability
The system should be easy to test and to find defects in. If required, it should be easy to divide into modules for testing.


Flexibility
The system should be flexible enough to modify and adaptable to other products with which it needs to interact. It should be easy to interface with standard third-party components.


Reusability
Software reuse is a cost-efficient and time-saving way to develop. Code library classes should be generic enough to be used easily in different application modules, and the application should be divided into modules so that they can be reused across the application.


Interoperability
It should be easy for the product to exchange data or services with other systems. Different system modules should work across different operating system platforms, databases and protocol conditions.


Appreciate your feedback/comments!!!

Friday, June 3, 2016

Testing for security attack

In my earlier post Security Testing I explained what security testing is and the different types of attacks that can occur on a website.

Now let's look at different security testing approaches -


1. Test Password cracking
Most web applications use login screens to authenticate users. In password cracking, the tester should check the password complexity enforced by the website.
If the username and password are stored in a cookie, make sure they are strongly encrypted; without encryption an attacker can use various methods to steal the cookies.

2. Test URL manipulation 
The tester should check whether the application passes important information in the query string (URL). As the URL is easily accessible, an attacker can steal data from it. The tester can modify a parameter value in the query string to check whether the server accepts it. Also test URLs entered directly in the address bar without navigating from the previous page.

3. Test SQL Injection
In UI controls like textboxes, enter SQL conditions which are always true, like '1=1' (with quotes).
Make sure the textbox does not accept the single-quote character ('). If a database error is thrown after inserting the above data, that means the application accepted the input and executed the statement on the server, which is highly vulnerable (see the short code illustration after point 4 below).

4. Cross Site Scripting (XSS)
The tester should also test for XSS (cross-site scripting). The application should not accept any HTML or script code. Many web applications use variables in the URL to pass data to the server. E.g.:
http://www.mysite.com/Home.aspx?query=abcd
An attacker can easily pass some <script> code as the 'query' parameter. If the page echoes it back without encoding, the malicious <script> is executed in the victim's browser.
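
To illustrate what the SQL injection check in point 3 is probing for, here is a hedged C# sketch (the Users table, column and variable names are all made up). The first statement builds SQL by string concatenation, which is exactly what makes input like '1=1' dangerous; the second uses a parameterized query, which is what a tester should expect a safely written application to do.

// Hypothetical illustration only; assumes a Users table and a valid connection string.
using System.Data.SqlClient;

class SqlInjectionDemo
{
    static void Query(string connectionString, string userInput)
    {
        // Vulnerable: user input is concatenated into the SQL text, so input such as
        //   x' OR '1'='1
        // changes the meaning of the statement. This is what the tester's probe exploits.
        string unsafeSql = "SELECT * FROM Users WHERE Name = '" + userInput + "'";

        // Safer: the input is passed as a parameter and treated as data, not as SQL.
        using (SqlConnection conn = new SqlConnection(connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM Users WHERE Name = @name", conn))
        {
            cmd.Parameters.AddWithValue("@name", userInput);
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // read results here; malicious input no longer alters the query structure
            }
        }
    }
}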


Note - In order to perform a useful security test of a web application, the tester should have good knowledge of the HTTP protocol. It is important to have an understanding of how the client (browser) and the server communicate using HTTP. Additionally, the tester should at least know the basics of SQL injection and XSS.

Did you like the post? Please share your feedback!

Security Testing

Security testing is a testing process which tests an application for confidentiality, integrity, authentication, availability, authorization and non-repudiation.

In short, it verifies that data is available and accessible to authentic users only, and that the amount of data available to any user matches their authorization level.

As more and more transactions are performed online through websites, proper security testing of web applications is becoming very important.

Below are various types of popular security attacks -

URL manipulation - 
Some web applications send user data to the server by appending it to the URL. This gives a hacker a chance to manipulate the data and send wrong information.


SQL injection
In this attack, SQL statements are inserted into the UI controls of the application. When the page is submitted to the server, those statements are executed on the server, compromising user data.


Spoofing
Attacking users by creating hoax look-alike websites or emails, so the user navigates to the fake site thinking it is the original and enters sensitive data.


XSS (Cross-site scripting)
Cross-site scripting allows attackers to inject client-side script into web pages and bypass access controls.

In next post we will look at different approaches to test website for security attacks.

Please let me know your feedback about this post.



Friday, May 20, 2016

Deserialize JSON to C# Object

JSON (JavaScript Object Notation) is a lightweight text format for data interchange.
Because it is so lightweight, it is nowadays heavily used as a data-interchange medium.
Many web services expose data in JSON format.

In this example we will look at sample code in C# to consume JSON data and deserialize it into class objects.

Suppose one such web service exposes data in the JSON format below -

[{"StudentNumber":1,"StudentFirstName":"Tom","StudentLastName":"Alter"},{"StudentNumber":2,"StudentFirstName":"Bruce","StudentLastName":"Lee"},{"StudentNumber":3,"StudentFirstName":"Bret","StudentLastName":"Lee"},{"StudentNumber":4,"StudentFirstName":"Mickey","StudentLastName":"Mouse"},{"StudentNumber":5,"StudentFirstName":"Donald","StudentLastName":"Duck"},{"StudentNumber":6,"StudentFirstName":"Vicky","StudentLastName":"Joseph"}]

The following sample shows one way of deserializing JSON data into C# objects.

1. First we create a public class with public properties. These properties correspond to the members of the JSON. For the above sample JSON the class would be -

public class Student
{
    public int StudentNumber { get; set; }
    public string StudentFirstName { get; set; }
    public string StudentLastName { get; set; }
}

In the sample JSON we can see StudentNumber, StudentFirstName and StudentLastName repeating, so the whole data set will be stored in a collection of type Student.

2. Next we write a method which downloads the JSON and deserializes it into a list of C# objects -

private List<T> JSONToCsharp<T>(string jsonUri)
{
    WebClient wc = new WebClient();
    wc.UseDefaultCredentials = true;

    // Download the raw JSON text from the web service.
    string data = wc.DownloadString(jsonUri);

    // Wrap the text in a memory stream and deserialize it into a List<T>.
    MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(data));
    DataContractJsonSerializer serializer = new DataContractJsonSerializer(typeof(List<T>));
    List<T> result = (List<T>)serializer.ReadObject(ms);
    ms.Close();

    return result;
}

Once this method is executed, the result is a List<Student>, where students[0] corresponds to the first record in the JSON, i.e.
students[0].StudentNumber will be 1
students[0].StudentFirstName will be Tom
students[0].StudentLastName will be Alter

Let's look at the code in depth -
The code is quite easy to understand. WebClient provides methods for sending data to and receiving data from a resource identified by a URI.

string data = wc.DownloadString(jsonUri);
This downloads the JSON text returned by the service into the data string.

The DataContractJsonSerializer class serializes objects to JSON and deserializes JSON data into objects.

While calling the above JSONToCsharp method, <T> is replaced with the class name, i.e. Student -
List<Student> students = JSONToCsharp<Student>(jsonUri);

The students collection then holds the complete JSON data as a list of Student objects.

Note - The following namespaces are required for the WebClient, DataContractJsonSerializer, MemoryStream, Encoding and List<T> classes used above.
using System.Net;
using System.Runtime.Serialization.Json;
using System.IO;
using System.Text;
using System.Collections.Generic;

Please do share your views about the post.

Tuesday, April 26, 2016

Verification and Validation

Verification

  • We check that the application is created as per the SRS (Software Requirement Specification), meaning the application is built to do what the user wants it to do.
  • It comes before validation.
  • Static testing, such as reviews, is a verification process.

Validation
  • We test that the application performs its actions in the correct way. Here we do not check whether the application meets the user requirement; that part was already covered in verification. We check that whatever the application does, it does it correctly.
  • It is performed after verification.
  • Tests are executed in validation.

Let us go through one example - 

Requirement specification says that - 
User wants to control the lights in 4 rooms by remote command sent from the UI for each room separately.

Then functional specification is created as follows - 
1. The UI will contain 4 checkboxes labelled according to the rooms they control.
2. When a checkbox is checked, a turn-on signal is sent to the corresponding light and a green dot appears next to the checkbox.
3. When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.


Verification
We now verify that  - 
  • The requirement specification is complete and correct, such that anyone can understand the requirement easily.
  • The functional specification correctly translates the requirement into the design.
  • The source code has functions for the 4 checkboxes to send the signals.


Validation
Now we validate that - 
  • Checkboxes accept input from the user.
  • Lights are actually controlled by the checkboxes.

Please let me know your views about the post.

Monday, April 25, 2016

Severity and Priority of bug


Every bug has two fields, severity and priority, which have always confused many testers.
Let us understand them with simple examples.

Severity 
It means how severe the impact of the bug is on the application.

Priority
It means how soon the bug should be fixed.

Now let's look at the four combinations of severity and priority.

High Priority & High Severity
Consider an application that maintains student details; when a new record is created, the application terminates abruptly.

High Priority & Low Severity
A spelling mistake on the cover page or title of the application, or the company logo is missing.

Low Priority & High Severity
A feature that is rarely used in the application is broken; say the application has an annual report feature but it picks the wrong months, e.g. January to December instead of April to March.

Low Priority and Low Severity
Any spelling issue within a paragraph or in the alternate text of an image.


Friday, April 22, 2016

Create Text File using C#


This post shows sample code in C# to create a text file. This code can also be used to create a log file for any console or Windows Forms based application.

It is assumed that you already have basic knowledge about C#.

using System;
using System.IO;

class Program
{
  static void Main(string[] args)
  {
    String fileName = @".\File.txt";
    TextWriter fileWriter = new StreamWriter(fileName, true);

    String fileText = "First Line";
    fileWriter.WriteLine(fileText);

    fileText = "SecondLine";
    fileWriter.WriteLine(fileText);

    fileWriter.Close();
  }
}

Understanding the code - 

@".\File.txt";
\ in C# has a special meaning: it starts an escape sequence. But here we want to use it to represent a directory path. So, to tell the compiler not to treat \ as part of an escape sequence, we can either put an extra \ next to the existing \ or put @ before the string.
Strings prefixed with @ are called verbatim strings.

The two strings below are the same -
@".\File.txt";
".\\File.txt";

(.) represents the current directory, which contains the executable. If the program is run from the Visual Studio IDE, (.) represents the bin directory.

So overall, .\File.txt means the text file is in the same directory as the executable.

TextWriter fileWriter = new StreamWriter(fileName, true);
This creates a text file on the hard disk and associates a default buffer with it.
fileName contains the path, file name and extension.
true means that if the file already exists, the new text is appended to it.

fileText = "SecondLine";
fileWriter.WriteLine(fileText);
The WriteLine method writes the content of the string into the buffer associated with the file.

fileWriter.Close();
Finally, the Close() method writes the contents of the buffer to the actual file, saves the file and releases the buffer.
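
Since Close() must run even when an exception is thrown, a common alternative is a using block, which disposes (and therefore flushes and closes) the writer automatically. This is just a sketch of the same code above written that way.

using System;
using System.IO;

class Program
{
  static void Main(string[] args)
  {
    // The using block calls Dispose() when it is exited,
    // which flushes the buffer and closes the file, even if an exception occurs.
    using (TextWriter fileWriter = new StreamWriter(@".\File.txt", true))
    {
      fileWriter.WriteLine("First Line");
      fileWriter.WriteLine("SecondLine");
    }
  }
}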

Did you like the post? Please share your thoughts.

Empty Recycle Bin programmatically using C#

Below is sample code in C# to clean/empty the Windows Recycle Bin.

private static void EmptyRecycleBin()
{
string recycleLocation = String.Empty;
string strKeyPath = "SOFTWARE\\Microsoft\\Protected Storage System Provider";
 
RegistryKey regKey = Registry.CurrentUser.OpenSubKey(strKeyPath);
string[] arrSubKeys = regKey.GetSubKeyNames();
if (IsVista() || IsWin7())   //Methods are described below
{
  recycleLocation = "$Recycle.bin";
}
else
{
  recycleLocation = "RECYCLER";
}

ObjectQuery query = new ObjectQuery("Select * from Win32_LogicalDisk Where DriveType = 3");

ManagementObjectSearcher searcher = new ManagementObjectSearcher(query);

ManagementObjectCollection queryCollection = searcher.Get();

foreach (ManagementObject mgtObject in queryCollection)
{
  string strTmpDrive = mgtObject["Name"].ToString();

  // default is true
  foreach (string strSubKey in arrSubKeys)
  {
    string regKeySID = strSubKey;
    string recycleBinLocation = (strTmpDrive + "\\" +
                recycleLocation + "\\" + regKeySID + "\\");

    if (recycleBinLocation != "" &&
     Directory.Exists(recycleBinLocation))
    {
      DirectoryInfo recycleBin = new
        DirectoryInfo(recycleBinLocation);

      // Clean Files
      FileInfo[] recycleBinFiles = recycleBin.GetFiles();
      foreach (FileInfo fileToClean in recycleBinFiles)
      {
        try {
            fileToClean.Delete();
        } catch (Exception)
        {
            // Ignore exceptions and try to move next file
        }
      }

   // Clean Folders
   DirectoryInfo[] recycleBinFolders=recycleBin.GetDirectories();
   foreach (DirectoryInfo folderToClean in recycleBinFolders)
   {
     try {
         folderToClean.Delete(true);
     } catch (Exception)
     {
         // Ignore exceptions and try to move next file
     }
    }
    Console.WriteLine("Cleaned up location:
     {0}",recycleBinLocation);
   }
  }
 }
}

private static bool IsVista()
{
   // Windows Vista reports OS version 6.0
   Version osVersion = Environment.OSVersion.Version;
   return osVersion.Major == 6 && osVersion.Minor == 0;
}

private static bool IsWin7()
{
   // Windows 7 reports OS version 6.1 (not 7.0)
   Version osVersion = Environment.OSVersion.Version;
   return osVersion.Major == 6 && osVersion.Minor == 1;
}
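
A note on dependencies: based on the classes used above, the snippet needs the following namespaces (and a project reference to the System.Management assembly for the WMI query).

using System;
using System.IO;           // DirectoryInfo, FileInfo, Directory
using System.Management;   // ObjectQuery, ManagementObjectSearcher
using Microsoft.Win32;     // Registry, RegistryKey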

Share it if you like it.

Extension Methods in C#

Suppose you have created a class where operations are performed on class fields using public methods.

If you want to add more methods to extend the functionality of this class, you would typically create a child class, write the method in the child class, and use that class wherever required.

With extension methods you don't need to create a child class. They enable you to add methods to existing types (classes) without creating a new derived type, recompiling, or otherwise modifying the original type.
Extension methods are actually static methods, but of a special kind: they are called as if they were instance methods on the extended type.

They were introduced in C# 3.0.

They are only in scope when the namespace in which  they are defined is explicitly imported into the source code with a using directive.

Example - LINQ methods like GroupBy(), OrderBy() and Average() are extension methods on IEnumerable<T> types like List<T> or arrays.
They are not defined on those types but can be accessed using the dot operator on the list objects.

Example -
using System;
using System.Linq;   // OrderBy is an extension method defined in System.Linq

class MyExtensionMethod
{
    static void Main()
    {
        int[] arr = { 10, 45, 15, 39, 21, 26 };
        var result = arr.OrderBy(s => s);
        foreach (var i in result)
        {
            Console.Write(i + " ");
        }
    }
}

Output - 10 15 21 26 39 45

Here OrderBy() method is an extension method.

Create your own extension method -
--------------------------------------
using System;

namespace MyExtensions
{
    public static class MyExtensionsMethod
    {
        // Extension method: the 'this' modifier on the first parameter makes
        // WordCount callable on any string instance.
        public static int WordCount(this String str)
        {
            return str.Split(new char[] { ' ', '.', '?' },
                             StringSplitOptions.RemoveEmptyEntries).Length;
        }
    }
}

The first parameter in public static int WordCount(this String str) specifies which type the method operates on, and it is preceded by the this modifier.

And it can be called from an application as below:
using MyExtensions;
string s = "Hello Extension Methods";
int i = s.WordCount();

Note -
1. Extension methods cannot access private variables in the type they are extending.

2. An extension method with the same name and signature as an interface or class method will never be called. At compile time, extension methods always have lower priority than instance methods defined in the type itself. In other words, if a type has a method named Process(int i), and you have an extension method with the same signature, the compiler will always bind to the instance method.


Please share your views about this post.

Create Log file for Coded UI Tests

Logs containing results and other information help in debugging test failures. This post will help you create a log file for a Coded UI test in Microsoft Visual Studio 2010.

Suppose there is one Coded UI test method, CodedUITestMethod1, with one recorded method, SignInTest, in UIMap.Designer.cs.

[TestMethod]
public void CodedUITestMethod1()
{
   this.UIMap.SignInTest();
}

To add logging, modify the partial class in UIMap.cs as follows -

public void WriteLogs(string message)
{
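  // strAppPath is assumed to be a class-level string field holding the folder where the log file is written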
  FileInfo f = new FileInfo(strAppPath + "Results.txt");
  StreamWriter w = f.AppendText();
  w.WriteLine(message);
  w.Close();
}
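
Optionally, you can timestamp each entry so failures are easier to trace. This is a minimal variation of the same method, keeping the assumption that strAppPath is a string field pointing at the log folder.

public void WriteLogsWithTime(string message)
{
  // Prefix each log line with the current date and time.
  FileInfo f = new FileInfo(strAppPath + "Results.txt");
  StreamWriter w = f.AppendText();
  w.WriteLine("{0:yyyy-MM-dd HH:mm:ss} {1}", DateTime.Now, message);
  w.Close();
}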

Call this method from test method as follows -

[TestMethod]
public void CodedUITestMethod1()
{
  this.UIMap.SignInTest();
  this.UIMap.WriteLogs("Logging Into Aplication Event Success");
}

Please share your views about this post.