
Fall of the mighty – auto_ptr

Importance of semantics

Since C++98 – when the smart pointer was first introduced in the form of the auto pointer – our lives got a bit easier. Auto pointers were very useful, specifically for Resource Acquisition Is Initialization (RAII) handling of heap pointers. But it was not much later that we saw their fall: as of C++11 auto_ptr was already deprecated, and in C++17 it was removed from the standard library.

In this article, let’s shed some light on the fall of auto_ptr. First, let us quickly get to know auto_ptr in short.

What is auto_ptr?

The auto pointer, i.e. auto_ptr, is a wrapper around a resource that ensures the resource is destroyed when it leaves scope. It transfers ownership when assigned to another object, replacing the stored value in the source with a null pointer. (Remember this line, as it will play a crucial role in the story that follows.)

The fall

Let’s start from the beginning. An early auto_ptr design accomplished the transfer using copy syntax, even with a const auto_ptr as the source:

const auto_ptr<int> source(new int);
auto_ptr<int> target = source;  // move from const source to target

Because of this, it was possible to put an auto_ptr in a container. But it caused unexpected behavior, or even crashes, if the implementation of any function of the said container tried to store a local copy of an element of the sequence. For example, refer to the code below:

// somewhere in our code
sort(vec.begin(), vec.end(), indirect_less());

// implementation of sort
// ...
// value_type pivot_element = *mid_point;
// ...

The sort algorithm in the above example assumed that after construction, pivot_element and *mid_point were equivalent. However, when value_type turned out to be an auto_ptr, this assumption failed, and subsequently so did the algorithm.

In other words, auto_ptr’s copy constructor and assignment operator moved ownership of the pointer instead of copying it, making it prone to errors in generic code. Code that one would expect to perform a copy would suddenly move ownership, leading to failures and crashes. This made auto_ptr fundamentally unsafe.

To fix this, auto_ptr was made inhospitable to containers by disallowing copying from a const auto_ptr. As a result, adding one to a standard container became a compile-time error. Unfortunately, this did not help in the case of user-defined containers or even built-in arrays.

The final nail in the coffin

C++11 introduced unique_ptr. Unique pointers were not a 100% compatible replacement for auto pointers because they did not use copy syntax to move ownership; instead, ownership is transferred explicitly with std::move. As unique_ptr turned out to be the safer option, it was decided to deprecate the auto pointer.

R E F E R E N C E S

smart pointers – GeeksforGeeks
auto_ptr – Microsoft Docs
Open Std Org – Why deprecate auto_ptr?
https://stackoverflow.com/a/3697737/10287964
I have consolidated a small part of the information from the above links to keep the content minimal. You can find more details about smart pointers, and the auto pointer specifically, in the above links.

Mystery of size of structs in C++

HINT: Probably it’s not what you think!

#include <iostream>

struct ABC {
    int n1;      // size of int is 4 byte
    int* n2;     // size of pointer is 8 byte
    char c1;     // size of char is 1 byte
    char* c2;    // size of pointer is 8 byte
};

int main()
{
    struct ABC a;
    std::cout << sizeof(struct ABC);
    return 0;
}

In the code above, what do you think would be the output?
21 bytes (4+8+1+8)? – check the hint at the start of the article!

One more hint: The answer for the code below is different than the one above.

#include <iostream>

struct BAC {
    int* n2;     // size of pointer is 8 byte
    int n1;      // size of int is 4 byte
    char c1;     // size of char is 1 byte
    char* c2;    // size of pointer is 8 byte
};

int main()
{
    struct BAC b;
    std::cout << sizeof(struct BAC);
    return 0;
}

Whattt??

Before answering the questions above, let’s talk about a few things:

Structure Padding

A processor doesn’t read 1 byte at a time from memory; it reads 1 word at a time.
This means a 32-bit processor accesses 4 bytes at a time, whereas a 64-bit processor accesses 8 bytes at a time.

Thus, to save the number of CPU cycles required to access a structure, the compiler uses a concept called structure padding: members are stored in the order they are defined, each aligned on a suitable boundary, with unused padding bytes inserted so that no member straddles a word boundary.

  • So, in the 1st example with a 64-bit OS:
    1st word: 4-byte int n1 and 4 empty bytes (the n2 pointer cannot be stored in the same word)
    2nd word: 8-byte int pointer n2
    3rd word: 1-byte char c1 and 7 empty bytes (the c2 pointer cannot be stored in the same word)
    4th word: 8-byte char pointer c2

Thus, in the first example, the struct would be 32 bytes.

  • In the 2nd example with a 64-bit OS:
    1st word: 8-byte int pointer n2
    2nd word: 4-byte int n1, 1-byte char c1, and 3 empty bytes (the c2 pointer cannot be stored in the same word)
    3rd word: 8-byte char pointer c2

Whereas in the second example, the struct would require only 24 bytes.

Structure Packing

Even though padding is the default behavior, we can save space by using #pragma pack(1):

#include <iostream>

#pragma pack(push, 1)
struct ABC {
    int n1;      // size of int is 4 byte
    int* n2;     // size of pointer is 8 byte
    char c1;     // size of char is 1 byte
    char* c2;    // size of pointer is 8 byte
};
#pragma pack(pop)

int main()
{
    struct ABC a;
    std::cout << sizeof(struct ABC);
    return 0;
}

This forces the compiler to pack structure members with a particular alignment smaller than the default of the target architecture.

In the above case with a 64-bit OS, we are forcing the compiler to pack on a 1-byte boundary instead of the default 8-byte one – thus using exactly the space needed by the members of the structure.

NOTE: Using this may lead to poor performance, as many systems work better on aligned data.

R e f e r e n c e s

Why is the size of struct not equal to the sum of sizes of all members – StackOverflow
#pragma pack – Microsoft Docs
Structure padding and packing – OpenGenus(Abhishek Singh)
#pragma pack effect – StackOverflow
Effect of #pragma pack with different values
I have consolidated a small part of the information from the above links to keep the content minimal. You can find more details about packing & padding in the above links.

How to be Responsive…?

In the early 2010s, designers had to address a new phenomenon – varied screen sizes – and since then the device sizes we use have spread even further across the size chart.

There are two main solutions to this –
1. ADAPTIVE design: Craft several versions of one design, each with fixed dimensions.
2. RESPONSIVE design: Craft a single flexible design that shrinks or stretches to fit the screen.

In this article, we will focus on responsive design.

Now, the big question is: how do we make an app responsive? Here is how it can be done:
1. Make the most effective use of space and reduce the need to navigate
2. Take advantage of device capabilities
3. Optimize for input – touch, pen, keyboard, mouse

Some of the common ways one can make their app responsive:

Reposition – repositioning design elements
Resize – resizing design elements
Reflow – reflowing design elements
Show/Hide – hiding design elements
Replace – replacing design elements
Re-architect – re-architecting the user interface

Optimize Performance – Freezable Objects

Save on costly change notifications – Use Freezable objects

We have all seen that as the size of an application grows, challenges related to runtime performance start surfacing. One thing that may slip past our eyes unnoticed is the resources consumed by change notifications of dependency objects. One way we can optimize performance is by using the capabilities of Freezable objects.
Before understanding how they do that, let us quickly get to know a bit about Freezable objects.

Freezable Objects

A Freezable object is a special type of object that has two states: unfrozen and frozen. When unfrozen, a Freezable object appears to behave like any other object. When frozen, a Freezable can no longer be modified.

A Freezable provides a Changed event to notify observers of any modifications to the object. Freezing a Freezable object can improve its performance, because it no longer needs to spend resources on change notifications.

NOTE:

  1. Freezable objects are unfrozen by default.
  2. Not every Freezable object can be frozen. To avoid an InvalidOperationException being thrown, check the Freezable object’s CanFreeze property.
  3. Once frozen, a Freezable can never be modified or unfrozen; however, you can create an unfrozen clone using the Clone or CloneCurrentValue method.
  4. Regardless of which clone method you use, animations are never copied to the new Freezable.
  5. A frozen Freezable can also be shared across threads, while an unfrozen Freezable cannot.

USAGE:

Usage of Freezable objects is pretty straightforward. The following example will give you a quick overview of it.

Button myButton = new Button();
SolidColorBrush myBrush = new SolidColorBrush(Colors.Yellow);

if (myBrush.CanFreeze)
{
    // Makes the brush unmodifiable.
    myBrush.Freeze();
}

myButton.Background = myBrush;

try
{
    // Throws an InvalidOperationException, because the brush is frozen.
    myBrush.Color = Colors.Red;
}
catch (InvalidOperationException ex)
{
    MessageBox.Show("Invalid operation: " + ex.ToString());
}

You can even freeze an object from XAML.

.
.
<!-- Namespaces -->
xmlns:PresentationOptions="http://schemas.microsoft.com/winfx/2006/xaml/presentation/options" 
.
.
.
<!-- This resource is frozen. -->
<SolidColorBrush x:Key="MyBrush"
                 PresentationOptions:Freeze="True" 
                 Color="Red" />
.
.
<!-- Rest of the file -->

R e f e r e n c e s

Freezable Objects Overview – WPF .NET Framework | Microsoft Docs
Freezable Class – Microsoft Docs
WPF Application Performance improvement Using Freezable Objects (c-sharpcorner.com)
I have consolidated a small part of the information from the above links to keep the content minimal. You can find more details about optimization using Freezable objects in the above links.

Deferred execution can Hurt Us!

Let’s dive into the dark alleys of deferred execution and see how it can hurt us!

We C# developers have always loved LINQ, especially the fact that it offers deferred execution! I mean, which programmer doesn’t like laziness 😀
Linus rightly said –

“Intelligence is the ability of avoiding work, yet getting the work done..”

Linus Torvalds

Well, accounting for all the advantages that deferred execution provides, we should still be aware of the monstrous disadvantages that lurk in the small, dark alleys of LINQ.

Let me give you a simple example,

// Movie Class

public class Movie
{
    public string Name { get; set; }

    private int year;
    public int Year
    {
         get
         {
              Console.WriteLine($"Returning year for {Name}");
              return year;
         }
         set
         {
              year = value;
         }
    }
}
// File with extension methods

public static class MyEnumerable
{
    public static IEnumerable<T> Filter<T>(this IEnumerable<T> source, Func<T, bool> predicate)
    {
        foreach(var item in source)
        {
            if(predicate(item))
            {
                yield return item;
            }
        }
    }
}

Did you notice that we used yield return instead of return? To simplify the difference greatly: with return, we get the result only after the function has executed completely – execution all at once; with yield return, each value is produced only when it is asked for – deferred execution.

// File with use of extension methods
.
.
.
// somewhere in file
List<Movie> movies = new List<Movie>
{
    new Movie { Name = "3 Idiots", Year=2009 }, 
    new Movie { Name = "Baazigar", Year=1993 },
    new Movie { Name = "Queen", Year=2014 },  
    new Movie { Name = "The Sky is Pink", Year=2019 }
};


var query = movies.Filter(movie => movie.Year > 2000);
foreach(var movie in query.Take(2))
{
    Console.WriteLine(movie.Name);
}

// rest of file

This gives us following output:

Returning year for 3 Idiots 
3 Idiots 
Returning year for Baazigar 
Returning year for Queen 
Queen

As we can see, this works perfectly! Now, let’s make a small change…

var query = movies.Filter(movie => movie.Year > 2000);
Console.WriteLine(query.Count());
foreach(var movie in query.Take(2))
{
    Console.WriteLine(movie.Name);
}

Now the output looks like:

Returning year for 3 Idiots
Returning year for Baazigar
Returning year for Queen
Returning year for The Sky is Pink
3
Returning year for 3 Idiots
3 Idiots
Returning year for Baazigar
Returning year for Queen
Queen

Ouch! The list is being queried more times than expected; this is overwork – UNACCEPTABLE! So, do you see how deferred execution can hurt us?

Now, let’s get to the point – how do we avoid this accidental re-querying? It’s simple: we force the query to execute once, store the result, and then use it. We can do this using different operations; in our case we can simply call the ToList() method and store the result in a List.

var queryResult = movies.Filter(movie => movie.Year > 2000).ToList();
Console.WriteLine(queryResult.Count());
foreach(var movie in queryResult.Take(2))
{
    Console.WriteLine(movie.Name);
}

And as expected, the output is:

Returning year for 3 Idiots
Returning year for Baazigar
Returning year for Queen
Returning year for The Sky is Pink
3
3 Idiots
Queen

To Summarize

In a world with virtually infinite computational power, we often forget about the effect of a piece of code on total execution time. This, in my opinion, is what leads to an overall slower application. It is the difference between an application that could have been the Mona Lisa and is now just a stick figure 😉
So deferred execution, no matter how attractive, should be handled with caution.

R E F E R E N C E S

Pluralsight – LINQ Fundamentals by Scott Allen : https://app.pluralsight.com/course-player?clipId=ca8d58c0-8dab-4660-94e1-ce2e5bfe08a6
More about yield: https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/keywords/yield
I have consolidated a small part of the information from the above links to keep the content minimal. You can find more details about deferred execution and yield in the above links.

Which to use? : Processes vs Threads

Having a dilemma of whether to use a thread or a process? Let’s take a look.

What is Process?

A process, in the simplest terms, is an executing program. One or more threads run in the context of the process. Each process provides the resources needed to execute a program. The OS creates, schedules, and terminates the processes that the CPU runs. Processes created by the main process are called child processes.

What is Thread?

A thread is the basic unit to which the operating system allocates processor time. A thread can execute any part of the process code, including parts currently being executed by another thread. All threads of a process share its virtual address space and system resources. A thread is sometimes called a lightweight process.

Threads provide a way to improve application performance through parallelism. They represent a software approach to improving operating-system performance: a thread behaves much like a classical process but with far less overhead. Each thread belongs to exactly one process, and no thread can exist outside a process.

Here are some key differences in processes and threads:

Resource: Processes are independent and do not share resources with other processes; threads share resources with each other.
Weight: Processes are heavier; threads are lightweight.
Creation time: More for a process; less for a thread.
Communication time: More between processes; less between threads.
Context switch time: More for processes, as switching needs interaction with the OS; less for threads, as no OS interaction is needed.
Termination time: More for a process; less for a thread.

When to use which?

It is a no-brainer to see the advantages of threads over processes, and why anyone would prefer threads. So why would anyone prefer processes over threads? To understand that, let’s take the example of the Google Chrome browser.

— EXAMPLE —
Unlike most browsers of its time, Google Chrome uses many operating-system processes to keep web sites separate from each other and from the rest of your computer. But why did they use multiple processes despite the clear advantages of threads over processes?

Today, the majority of websites have active web content, ranging from pages with lots of JavaScript and Flash to full-blown “web apps” like Gmail. Large parts of these apps run inside the browser, just like normal applications run on an operating system. On top of this, the parts of the browser that render HTML, JavaScript, and CSS have become extraordinarily complex over time. These rendering engines frequently have bugs as they continue to evolve, and some of these bugs may cause the rendering engine to occasionally crash.

In this world, browsers that put everything in one process face real challenges for robustness, responsiveness, and security.  If one web app causes a crash in the rendering engine, it will take the rest of the browser with it, including any other web apps that are open.

It doesn’t have to be this way, though. Web apps are designed to run independently of each other in your browser, and they could be run in parallel. They don’t need much access to your disk or devices, either. This means it is possible to isolate web apps from each other in the browser more completely without breaking them; and since there is no common resource between them, nothing is lost by giving up the shared state that threads would provide. Google Chrome takes advantage of these properties and puts web apps and plug-ins in separate processes from the browser itself. This means that a rendering-engine crash in one web app won’t affect the browser or other web apps. It means the OS can run web apps in parallel to increase their responsiveness, and it means the browser itself won’t lock up if a particular web app or plug-in stops responding.
— END OF EXAMPLE —

Thus, the Google Chrome application beautifully helps us understand when using a process can be an advantage. To summarize, we would prefer processes over threads when:
1. The two tasks that need to run in parallel can run independently, and there is no shared data between them.
2. They don’t need access to our disk and devices.
3. We do not want one task to affect the responsiveness of other tasks or the application as a whole.
4. From a design perspective, we want to isolate functionality into independent, self-contained modules that do not need to share the same address space or memory, or even talk to each other.

References

https://docs.microsoft.com/en-us/windows/win32/procthread/about-processes-and-threads
https://www.guru99.com/difference-between-process-and-thread.html
https://www.tutorialspoint.com/operating_system/os_multi_threading.htm
https://stackoverflow.com/questions/617787/why-should-i-use-a-thread-vs-using-a-process
I have consolidated a small part of the information from the above links to keep the content minimal. You can find more details about processes vs threads in the above links.