Friday, September 4, 2015

Command Line Debug

GDB debug

'sayHello' being the program binary

> gdb sayHello

> b 10  - set a breakpoint at line 10

> run   - run the program under the debugger

> print variable_name  - print the value of a variable

Ctrl+x , a    toggles GDB's TUI (text-based GUI) mode.
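Putting the commands above together, a typical session might look like this (the variable name 'counter' is made up for illustration):

```shell
# Hypothetical session; 'counter' is an illustrative variable name
gdb sayHello
(gdb) b 10            # break at line 10 of the main source file
(gdb) run             # run until the breakpoint is hit
(gdb) print counter   # inspect a variable at the breakpoint
(gdb) next            # step over the current line
(gdb) continue        # resume until the next breakpoint or program exit
```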

Tuesday, May 5, 2015

Compiling/Installing Mainline Linux Kernel on Ubuntu

Canonical provides Ubuntu flavors of the mainline kernel. However, I wanted to install the latest kernel, straight from Linus's git repo. :)

I used a virtual machine environment to compile and install the kernel. We can install the following
software packages using Ubuntu's package manager (apt):

- QEMU - virtualization platform
- virt-manager - GUI for QEMU

I used the Ubuntu 14.04 distribution, downloaded from the Canonical site.
Create a new virtual machine, install Ubuntu, and log in to the machine. From here onwards, we are inside our VM.

Now we check out the mainline kernel from Linus's GitHub repo.

> git clone --depth 3 https://github.com/torvalds/linux.git

Here I have set the depth to 3 to avoid checking out the entire version history. Go inside the Linux source directory.

> cd linux

Let's first change the target name of our kernel by editing the top-level Makefile, changing the EXTRAVERSION parameter.


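A sketch of the edit (the '-test' value is an assumption, inferred from the '4.1.0-test+' kernel name that appears in the later steps):

```shell
# Replace the EXTRAVERSION line in the kernel's top-level Makefile;
# '-test' is assumed here, matching the 4.1.0-test+ name used below
sed -i 's/^EXTRAVERSION.*/EXTRAVERSION = -test/' Makefile
```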
Now we have to prepare the configuration of the target Linux kernel. Here we configure the set of modules that we want to package with our kernel. Manual configuration is a tedious task. We have two options:

1. Get the config of the current Ubuntu distribution and reuse it.
2. Get the list of loaded modules of the currently running system and use it as the config.

If you are following the first option, copy the config-xxx file found under Ubuntu's /boot directory to our Linux source directory as .config:

>cp /boot/config-3.13.0-40-generic .config
>make oldconfig

However, this config enables a large number of modules which never get loaded at runtime. We can create a much lighter build by generating a config from the modules currently loaded in the system.
Hence I used:

>make localmodconfig

Since our mainline kernel is bleeding edge, it will introduce some entirely new config options. The config program will prompt you for input; accept the default values (just press and hold ENTER).
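To avoid holding ENTER, the prompts can also be answered automatically:

```shell
# Feed an empty (i.e. default) answer to every new config prompt
yes "" | make localmodconfig
```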

Now we have a config suitable for our target environment. Let's compile the kernel source:

> make > /dev/null
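Optionally, the build can be parallelized across all CPU cores; redirecting stdout to /dev/null leaves only warnings and errors visible:

```shell
# -j sets the number of parallel jobs; nproc reports the core count
make -j"$(nproc)" > /dev/null
```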

Once the build is complete, our shiny new kernel image can be found at arch/x86/boot/bzImage (for an x86 build).
The next step is to install it. Before that, we have to install the kernel modules we built along with our kernel:

>sudo make INSTALL_MOD_STRIP=1 modules_install

This will install the new modules under /lib/modules/4.1.0-test+.

Now install the kernel,

>sudo make install

Create the initramfs file system for our new kernel. This is the RAM file system that gets loaded before the real file system is set up:

>sudo update-initramfs -c -k 4.1.0-test+

Finally, update GRUB so it can find our new kernel during the boot process:

>sudo update-grub

Now if you examine the /boot directory, you should see the newly created files:

- vmlinuz-4.1.0-test+
- config-4.1.0-test+

Before restarting, let's configure the GRUB menu so that we can choose our kernel during the boot process. Edit the /etc/default/grub file and comment out the lines below; otherwise GRUB will not display the boot menu and will go on to boot the default kernel.


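On a stock Ubuntu 14.04 install, the lines in question are the hidden-timeout settings (shown here already commented out; verify against your own /etc/default/grub):

```shell
# /etc/default/grub - comment these so the GRUB menu is shown at boot
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
```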
Restart the virtual machine.

During the boot process, select 'Advanced options' from the GRUB menu and select the new kernel.

Once you are logged in to the system, just double-check using the shell command:

>uname -a 

We are done!

Monday, May 4, 2015

Intel Processor Micro-architecture Code Names

It follows a tick-tock naming convention.

Tock - a new micro-architecture, keeping the process technology the same.
Tick - a shrink of the previous micro-architecture to a new process technology (45nm to 32nm, etc.)

Nehalem (tock)       45nm
Westmere (tick)      32nm

Sandy Bridge (tock)  32nm
Ivy Bridge (tick)    22nm

Haswell (tock)       22nm
Broadwell (tick)     14nm

source - Wikipedia Articles

Tuesday, July 16, 2013

[Webinar] Extending WSO2 Carbon for Your Middleware Needs

I am going to conduct a webinar on the above topic. In this webinar we are going to cover the possible usages of the extension points provided by the Carbon platform. Carbon is a very powerful middleware platform as it is; however, it provides enough flexibility to developers who want to alter its default behavior.

Carbon Runtime View - diagram by Sameera Jayasoma.

In this webinar we are going to cover topics such as:

1. The authentication framework in Carbon
2. The deployment engine, and how to introduce your own artifact deployment model
3. The component architecture in Carbon, and how it benefits you
4. How to make use of extension points to achieve monitoring requirements

I am going to take a few use cases and explain how we can achieve them using Carbon extension points. If time permits, I'm hoping to do some demos as well. Interested? :) Please register via this link.

Saturday, June 29, 2013

Enabling SAML2 SSO for WSO2 Carbon Server, OpenSSO/OpenAM as the IDP

WSO2 Carbon products come with built-in web-SSO authenticators. Within minutes, you can enable web-SSO for any WSO2 Carbon server using WSO2 Identity Server as the IDP. In this blog post we use OpenSSO/OpenAM as the IDP and walk through the configuration.


1. Download and install openAM/openSSO [download the war file from here]
2. Download the WSO2 product.

Setting up the environment

Configuring OpenSSO/OpenAM

OpenSSO provides two mechanisms to register a service provider:
  • Creating an SP fedlet
  • Setting up an SP using a metadata file called sp.xml
In this post I'm using the latter approach.

  1. Configure the sp.xml file.

  • The given sp.xml sample file uses https://localhost:9443/acs as the redirection URL. Configure it according to your environment: https://<host>:<port>/acs
  • The EntityID element of sp.xml should match the corresponding 'ServiceProviderID' value in the authenticators.xml file



2. Go to the Common Tasks -> Register Remote Service Provider link, select sp.xml as the file to upload, and select a Circle of Trust.


3. Go to Federation -> Entity Providers in the OpenSSO management console and select the
newly registered service provider. Select/tick the response signing attribute.

Under the Name ID format list, make sure you specify the 'transient' and 'unspecified' name ID formats.

Setting up WSO2 Carbon Server

1. Enable the SSO authenticator and configure the IDP URL in authenticators.xml found under



Change the following params accordingly,
  • ServiceProviderID - This can be any identifier; it doesn't have to be a URL. However, the configured value should be equal to the value we configured in the 'EntityID' element of sp.xml
  • IdentityProviderSSOServiceURL - The URL of your IDP
  • idpCertAlias - This is the certificate that gets used when validating responses from the IDP. The OpenSSO server's public key should be imported into the Carbon server's keystore with the alias name 'opensso'

Exporting/Importing Certificates

Add the public key of the selected Circle of Trust into the Carbon keystore (wso2carbon.jks) found
under $CARBON_HOME/repository/resources/security/wso2carbon.jks. You can use the Java keytool to do that.

- Exporting a public key

Here we will be using the default openSSO keystore certificate shipped with the server. It has the alias name 'test'
and is typically located at /home/opensso/opensso/keystore.jks. The default password is 'changeit'. To
export the public key of 'test':

keytool -export -keystore keystore.jks -alias test -file test.cer

The public key will be stored in the 'test.cer' file. You can view the certificate content with the command:

keytool -printcert -file test.cer

- Importing a public key to the wso2carbon.jks

Now import 'test.cer' into the Carbon keystore found under $CARBON_HOME/repository/resources/security/wso2carbon.jks:

keytool -import -alias opensso -file test.cer -keystore wso2carbon.jks

View the imported certificate using the command:
keytool -list -alias opensso -keystore wso2carbon.jks -storepass wso2carbon

Testing the Environment

Try accessing the Carbon management console (e.g. https://localhost:9443/carbon). The call will
redirect you to the IDP (the OpenSSO login page). Enter the username and password in the OpenSSO login
page. Once properly authenticated, you will be redirected back to the WSO2 Carbon product as a
logged-in user.
Please note: the authenticated user has to exist in the Carbon server's user store for authorization
(permission) purposes. Since the test environment described above does not share a user store
between the IDP (OpenSSO server) and the SP (Carbon server), I created a user called 'amAdmin' in the
Carbon server user store. Otherwise there will be an authorization failure during server login:

"[2013-06-17 10:22:04,601] ERROR
{org.wso2.carbon.identity.authenticator.saml2.sso.SAML2SSOAuthenticator} - Authentication
Request is rejected. Authorization Failure."

Please note: as of this writing, there are interop issues between the released versions of Carbon servers and OpenSSO. I have created a JIRA here, along with a patch to rectify the issue. Future Carbon releases will fix this.

Friday, April 26, 2013

Multiple Profiles and Shared Bundles With Eclipse P2 : Case Study

WSO2 Carbon is an OSGi-based server framework. A number of WSO2 middleware products use Carbon as their base platform. Carbon makes use of Eclipse Equinox as its OSGi framework implementation and uses Eclipse P2 as its provisioning framework.

Problem Description

Some Carbon products have their own deployment patterns in actual production deployments. However, to give a smooth evaluation experience, the same product is available as a 'ready to go' all-in-one zip distribution. For example, WSO2 Business Activity Monitor (BAM) allows enterprises to batch-process their collected enterprise data and present it by means of human-readable graphs, etc. A real production deployment includes three main parts.

Figure 1

1. The receiver component is responsible for receiving the events
2. The storage and analyzer component analyzes the stored data in batch mode
3. The presentation component presents the processed data by means of dashboard elements

Even though these components are distinct in their functionality and their deployment, they are all part of one product. The first experience of a middleware developer should be seamless, meaning the Business Activity Monitor product should include all these components. However, having all the components in the product results in a larger memory footprint at runtime. During an actual deployment, users have to deploy three separate BAM instances even if they are interested in only one function of the product at a given time; they can't deploy only the receiver bit of BAM.

All of WSO2's Carbon-based middleware products are available in the cloud as a multi-tenanted PaaS offering. This results in at least one product instance of each product type (Enterprise Service Bus, Application Server, etc.). If we take the per-product distribution size to be ~100MB, then ten product instances add up to 1GB. However, we want to make our cloud offering available to users as a downloadable archive. This will enable them to easily download, set up, and evaluate their own private PaaS, which means total distribution size matters.

Solution 1 : Handling multiple profiles

Figure 2

There is one important aspect of the BAM scenario described above: all the different deployment entities can share the same configuration. Being different components of the BAM server, the configuration requirements of the three components are more or less the same. If we can selectively activate these functionalities using a switch mechanism, that does the trick. Eclipse P2 has a concept called profiles. Once you provision your OSGi application using P2, the application is assigned a profile. During server startup, only the bundles that were provisioned under that particular profile are started. We created separate profiles for each BAM component, namely receiver, analyzer and dashboard. During server start-up, the user can select a profile by means of a system property. The default profile contains the functionality of all the BAM components, just like the good old days. A profile is a logical partition that works with the server provisioning framework. All the profiles share the same set of bundles in a shared repository, hence there is no increase in distribution size.
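A sketch of selecting a profile at start-up (the exact property name and profile values are assumptions; check the product documentation for the real form):

```shell
# Start BAM with only the receiver profile's bundles activated;
# '-Dprofile=receiver' is assumed, with 'analyzer' and 'dashboard' likewise
sh bin/wso2server.sh -Dprofile=receiver
```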

Solution 2 : Shared bundle pool during run-time


The second problem can also be addressed using P2 profiles and shared bundle pools. However, there is a subtle difference in the approach. Unlike the BAM use case, each of these different products has its own configuration area and data/persistence locations. We can tackle this requirement by completely removing the bundle pool location from the product distributions and placing it outside all the products. Each product will have its own configuration/distribution area, and all the products will point to the same bundle location, hence the total distribution size will be much smaller. The use case demands runtime isolation (running two or more products in parallel using the same bundle location), and we can successfully achieve that using P2 profiles and bundle pooling.


P2's bundle pooling functionality, coupled with roaming (the ability to move your provisioned application from its original location; it relies on relative paths), seemed to be broken. However, with help from the P2 dev list we figured out a workaround.

WSO2 BAM product presentation slides