Some problems when commands are hashed

In Unix systems, commands are hashed for performance reasons [man hash]. Today, I ran into a problem related to this and wasted a lot of time resolving it. On my Solaris machine, the patch command is /usr/bin/patch and its version is very old. So I installed a newer patch using pkgadd, which put the new patch command in /usr/local/bin/. In my PATH variable, /usr/local/bin/ comes before the /usr/bin directory, so the `which patch` command shows /usr/local/bin/patch. But when I executed patch at the command prompt, it ran the old version of patch [/usr/bin/patch]. After some time, I tried the type command [`type patch`] and it showed that /usr/bin/patch is hashed. Then I came to know about the hash command in Solaris. After rehashing, everything started working fine.
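
For reference, the whole episode boils down to something like this in an interactive shell [output is approximate; in sh/bash the command is hash -r, in csh/tcsh it is rehash]:

$ which patch                  # searches PATH directly, ignoring the shell's hash table
/usr/local/bin/patch
$ type patch                   # shows what the shell will actually run
patch is hashed (/usr/bin/patch)
$ hash -r                      # make the shell forget all remembered command locations
$ type patch
patch is /usr/local/bin/patch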

My first bug in OpenSolaris

Yesterday, while using the getpass* functions & the passwd command in Solaris, I found a bug. Type the passwd command, press ctrl-z to stop that process, and then bring the process back to the foreground using the fg command. Now you can see your password while typing it. Basically, the tty settings are reset when you stop the process and bring it back to the foreground.
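
A rough way to see it [the prompts and messages below are only illustrative]:

$ passwd
Enter existing login password: ^Z      # suspend passwd while it has echo turned off
[1]+ Stopped            passwd
$ stty -a | grep echo                  # at the shell prompt, echo is back on as usual
$ fg
passwd
mysecret                               # the bug: the password you type is now echoed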

When I sent a mail to the Sun Security Coordination Team [secure@security.Eng.Sun.COM], they said it had been found internally recently and was already filed as bug http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6443857 in the OpenSolaris bug database. Since it is marked as a security vulnerability in the bug description, it is unfortunately not visible from outside.

Projects in Solaris

When I was reading an interesting blog about prstat behavior, I read about projects in Solaris. After reading about projects, I came to know that they are mainly used for controlling the resources used by processes. For example, if we want to control CPU/disk/memory usage, we create a project [using projadd, projmod] and modify the resource control field in /etc/project. Then we can create processes in, or attach processes to, this project [using newtask]. Once a process is created in or attached to the project, all of its resource constraints apply to that process. For example, if the resource constraint name is process.max-file-descriptor and the value is 10, a process in that project cannot have more than 10 file descriptors. Moreover, using the prstat -J command, we can clearly see the resources used by each project. I felt this is a very good way to control the resources used by specific processes, but I don't know whether something similar exists on other platforms like Linux. Some good links that I referred to are here & here.
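
A rough sketch of the commands involved [the project name, user, and resource-control value here are only an example; see the projadd/projmod/newtask man pages for the real details]:

# projadd -U myuser test.project
# projmod -K "process.max-file-descriptor=(privileged,10,deny)" test.project
$ newtask -p test.project ./myserver       # run a process under this project
$ prstat -J                                # per-project resource usage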

Make vim remember the file cursor position : Solaris

Some time back, I installed vim on Solaris after taking the package from the sunfreeware site. I configured colors as well. But one really annoying thing is that it does not remember the cursor position in a file opened previously, i.e. when I open a file, move to line 1000, quit, and open the same file again, the cursor is placed at line 0. After reading a few web pages [not much help], I read the vim documentation [:help viminfo]. There I found that by typing `" [backquote & doublequote], the cursor is moved to the line where we left off last time. But I don't want to type `" every time I open a file; I want to automate this. After searching for the `" pattern in my Linux vim config files, I found the following code snippet in /etc/vimrc on my SuSE box.

if has("autocmd")
  autocmd BufReadPost *
    \ if line("'\"") > 0 && line("'\"") <= line("$") |
    \   exe "normal g`\"" |
    \ endif
endif

After I added the above code to my Solaris .vimrc, things began working properly. Actually, I had sent a mail to the vim mailing list and then started working on the problem myself. When I checked my mail after solving the problem, I saw a few good replies from the vim mailing list pointing to these links: tip#80, yahoo_groups.

Gmake going in loops??

For many days, I have observed that our build sometimes goes into recursive loops and does not get out of them for many hours/days. I found the reason for this problem: it is the modification times of the makefile & C/C++ files. I use tar.gz archives, which preserve file modification times, and transfer them to other systems. If those systems are in a different timezone, it can happen that the makefile/C/C++ files appear to be modified in the future. Since gmake compares file modification times when running a makefile, it goes into a recursive loop. The simple solution is to touch the makefile/C/C++ files that have future modification times, using the touch command.
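
Something along these lines can be used to find [and then touch] the files with future timestamps [paths and patterns are just an example]:

$ touch /tmp/now                           # reference file carrying the current time
$ find . \( -name Makefile -o -name '*.c' -o -name '*.cc' -o -name '*.h' \) -newer /tmp/now -print
$ find . \( -name Makefile -o -name '*.c' -o -name '*.cc' -o -name '*.h' \) -newer /tmp/now -exec touch {} \;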

Solaris: Make a shared library which can be executed

After many days, I got some free time to continue the work that I had done here.
In Solaris, I know that by writing asm code we can make a shared library work as an executable. So I thought of writing a simple tool which can make the necessary changes to a shared library so that it acts as an executable. First I read the cc man page to find out how to set an entry point, and I found that we have to pass the -e option to the linker. So I wrote a test program [download] and compiled it to create a shared library. When I executed it [$ ./libtest.so], it gave a segmentation fault and failed. I used elfdump to check whether the entry point was being set properly, and it was being set correctly. We can see the entry point with elfdump -p <executable> [the e_entry value], and we can search for that entry point in 'elfdump -s <executable> | grep <e_entry value>'. For reading the section headers, I used the example given in the elf man page. When I ran that section-header printing program on the shared library, I observed that the created shared library did not have a '.interp' section. '.interp' contains the interpreter value; after loading the file, the system hands control to the interpreter if the '.interp' section exists. So, at this stage, I thought of writing a program using the elf/gelf library to add the interp section. But when I read the man pages again, I saw the -I option to the linker, which sets the interp value. [By default, for executables, the .interp section is created and the /usr/lib/ld.so value is set; by default, for shared libraries, no interp section is created.] So I used the -I option while compiling my program into a library. Now the entry point is correct and the .interp section is also there. When I executed my shared library [$ ./libtest.so], it ran normally without giving a segmentation fault.
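
A minimal sketch of what such a test program can look like [this is illustrative, not the downloadable program; the entry-point name is made up, and you should check how your cc driver forwards the -e and -I options to ld]:

/* test.c - a shared library that can also be run directly */
#include <stdio.h>
#include <stdlib.h>

void lib_main(void)
{
    printf("hello from a shared library running as an executable\n");
    /* there is no usual C runtime start-up/shutdown code here,
       so terminate explicitly instead of returning into nowhere */
    exit(0);
}

/* Possible build line [illustrative]:
 *   cc -Kpic -G -o libtest.so test.c -Wl,-e,lib_main -Wl,-I/usr/lib/ld.so.1
 * and then run it directly:
 *   ./libtest.so
 */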

One more problem here is getting the command-line arguments. I think we have to write some assembly code to do that. If you want that feature, you can refer to this and this. You can download my programs here.

My experience with using GNU screen

In my B.Tech, we were taught about using the screen command, but at that time I did not realise its importance, because I used to open one connection to the server and write code/compile/execute in that terminal only. Now I open more than 5 connections. Last week, I read a small tutorial about screen and felt its importance. Screen is used for terminal multiplexing: we open only one terminal connection to the server, and in that terminal we can use screen to create multiple virtual windows. Since we can detach and re-attach screen sessions, we can also use it like a remote desktop for Linux.

In Linux, screen is already installed, so I started using it directly. On my Solaris machine it is not installed, so I downloaded its package from the sunfreeware site and installed it. When I started screen, I was not able to use backspace on the command line, and I could not find any clues on the net. But when I connected to that machine using ssh, that problem was solved. After that, when I opened a file using vim inside screen, I was not able to use the arrow keys, and the colors were not according to the default vim colorscheme. I searched on the internet but could not find any clues. I knew it was a problem with TERM & TERMCAP, so I exited screen and started it again with TERM=screen screen. Then I could use the arrow keys and the colors were also fine. But in some places vim was still not showing code properly. Again, I wasted a lot of time searching on the net. After getting frustrated with putty and terminal types, I connected to that Solaris machine from my Linux box and started screen with the same command [TERM=screen screen]. Luckily, everything in screen was then working fine. I still don't know why I was getting these problems when I connected to Solaris directly from putty and used screen.
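
In short, the workaround was just [illustrative]:

$ echo $TERM                 # whatever putty/the terminal advertises, e.g. xterm
$ TERM=screen screen         # force a terminal type that this screen/termcap setup understands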

Stack traces in C/C++ programs in Solaris & Linux

In Java, there are direct APIs for getting stack traces, but for C there are no popular APIs for doing the same. So I was searching for this feature in Linux and came across this link. It provides a very clean and simple interface for getting stack traces; they even give a sample program at the end of the article demonstrating how to use it. Note: we have to compile the program with the -rdynamic flag. In Solaris, I know we can do all these tricks with the dtrace tool, but if I want to get/use the stack trace inside the C program itself, dtrace doesn't help. After searching for "stack" in the contents of /usr/include/*.h, I found some functions in ucontext.h which are relevant to what I want. Then, after seeing the man page of one of those functions [man printstack], it was confirmed. I have written a small program to demonstrate stack traces in a C program; you can download it here. For C++ programs, we may have to use c++filt [like ./a.out | c++filt] to get the correct function names from the mangled ones.
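
Here is a small sketch that combines both approaches [treat it as an outline; the Linux part uses the execinfo.h interface and needs -rdynamic, the Solaris part uses printstack from ucontext.h]:

/* stacktrace.c - illustrative only */
#include <stdio.h>

#ifdef __sun
#include <ucontext.h>              /* printstack() on Solaris */
#else
#include <stdlib.h>
#include <execinfo.h>              /* backtrace()/backtrace_symbols() on Linux/glibc */
#endif

static void show_stack(void)
{
#ifdef __sun
    printstack(1);                            /* write the stack trace to fd 1 (stdout) */
#else
    void *frames[32];
    char **names;
    int n, i;

    n = backtrace(frames, 32);                /* collect return addresses */
    names = backtrace_symbols(frames, n);     /* translate them to symbol names */
    if (names != NULL) {
        for (i = 0; i < n; i++)
            printf("%s\n", names[i]);
        free(names);
    }
#endif
}

static void inner(void) { show_stack(); }
static void outer(void) { inner(); }

int main(void)
{
    outer();
    return 0;
}

/* Linux:   gcc -rdynamic stacktrace.c    [so symbol names appear in the trace]
 * Solaris: cc stacktrace.c               [printstack lives in libc]
 */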

Finding memory leaks in Solaris

I started searching for a tool like the memusage library in Solaris and read about the umem library. I felt this tool is useful for finding memory leaks in a running program, but we have to use umem with mdb, whose interface is very difficult to use. Then, after exploring this topic further, I opened the sunstudio GUI and started debugging with 'memory checks' on. I found out that sunstudio is internally using dbx, so I noted the dbx commands that are used for finding memory leaks.

dbx is much like gdb. First, we have to build the executable [CC test.c], then start that executable under dbx [dbx ./a.out], set the memory checks option on [check -memuse], and run the executable [run]. If there are leaks in our application, we will get a table like the following:

Total Size  Num of Blocks  Leaked Block Address  Allocation call stack
==========  =============  ====================  =====================
         8              1             0x80688a8  func2 < func1 < func < main

This table says that there is a memory leak of 8 bytes along the call path main > func > func1 > func2. From this, we know that the memory allocated in function func2 is not being freed, and we can work out where that allocated memory should be freed.
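
For reference, a tiny program that produces exactly this kind of report [illustrative; the block address will obviously differ], together with the dbx session described above:

/* leak.c - deliberately leaks 8 bytes */
#include <stdlib.h>

static void func2(void)
{
    void *p = malloc(8);    /* allocated here but never freed: the 8-byte leak */
    (void)p;
}

static void func1(void) { func2(); }
static void func(void)  { func1(); }

int main(void)
{
    func();
    return 0;
}

/* dbx session, as described above:
 *   cc -g leak.c
 *   dbx ./a.out
 *   (dbx) check -memuse
 *   (dbx) run
 */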

Enjoy! Happy leak-free code!!

Multi-line greps

In Linux/Solaris, we can use fgrep/grep for pattern matching in files. But one limitation of these commands is that they restrict their pattern matching to a single line; they don't search for a pattern spanning multiple lines. Then I found an open source tool, pcregrep [available as an SDK also], which provides multi-line grep. On many recent Linux and Solaris machines I have found this command, but they ship older versions [maybe stable versions], and the older versions did not have the multi-line grep functionality. So, I downloaded its latest source and built it.

One use case: I have function names and need to find their declarations in header files, and a function may be declared across many lines. In this scenario, I used pcregrep.
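
For example, something like this matches a declaration even when its parameter list is wrapped across several lines [the function name and file glob are made up; -M is pcregrep's multi-line option]:

$ pcregrep -M 'int\s+parse_config\s*\([^)]*\)\s*;' *.h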
