Sunday, September 16, 2007
Two weeks of trying out OLPC development environments in my spare time is finally starting to show some visible results.
I did not enjoy exploring OLPC software in an emulator and wanted to install it on a partition of a (slow) spare PC with a VIA C3 chip. The instructions are here: http://seth.anil.googlepages.com/olpconapc
Friday, August 3, 2007
Programming Lessons on Western Railway - Multiple threads and performance
Western Railway in Mumbai has four tracks: slow Up, slow Down, fast Up and fast Down. In order to provide a fast service to passengers in various areas, Western Railway runs some trains which are slow between Borivli and Andheri and fast between Andheri and Churchgate. There are trains which start or terminate at Andheri and are slow between Andheri and Bandra.
Conditions for passengers at intermediate stations can be very difficult; so, Western Railway runs special trains which, for example, start at Goregaon and are slow up to Andheri and fast after that.
A programmer will see an obvious design concern: these threads will need to be synchronized. Will there be a problem? Every day and night, I experienced the problem. The train would halt and we would wait. The design problem was even more severe: to switch between the slow and fast tracks, a train had to cross a track carrying traffic in the opposite direction. The heavier the traffic, as at peak hours, the worse the synchronization delays.
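In code, the crossing is a shared resource guarded by a lock. A minimal Python sketch of the contention (the names and timings are mine, not Western Railway's):
import threading
import time

crossing = threading.Lock()   # the stretch of track both directions must share

def run_train(name, trips):
    for i in range(trips):
        crossing.acquire()    # wait for the crossing to be free
        try:
            time.sleep(0.01)  # occupy the crossing while switching tracks
        finally:
            crossing.release()

trains = [threading.Thread(target=run_train, args=('fast Up', 5)),
          threading.Thread(target=run_train, args=('slow Down', 5))]
for t in trains:
    t.start()
for t in trains:
    t.join()
# Add more trains and the waits grow - exactly the peak-hour experience.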
My brain had nothing else to do during the seemingly endless delays except dream of a different design. If the mind was occupied, the claustrophobia of being surrounded by a mass of humanity seemed less.
My ideal solution - in another post.
Daydreaming - I wish I were Superman
I read in the morning paper that an SUV didn't stop for a cop and carried him a couple of kilometers. The cop was seriously injured. On an inside page, another story: a cop suspended for demanding a bribe over a traffic violation.
As I am crossing the road, a lunatic runs a red light. I scramble across, but how I wish I were Superman: I would have stood still and enjoyed the look on the driver's face as his car crumpled like a piece of paper in the collision.
Well, even if we can't do anything about it, at least we can daydream :)
Friday, July 13, 2007
Lesson in Programming on Western Railway - Capacity problems, look beyond compression
A general manager of Western Railway made the effort to quickly increase the capacity. He introduced train coaches which had very few seats and, hence, a much larger standing capacity. It seemed reasonable that many more people could fit into each coach. The coach looked like a typical metro or subway train coach with seats on the sides and standing room in the middle. Except that these coaches were much wider and standing in them in crowded conditions was a torturous experience.
The congestion on the platform did not get any less. I avoided such a train, preferring to wait. The GM carried out a survey in which people said that they would accept such coaches if they were air conditioned.
The coaches were not air conditioned; rather, they were withdrawn. I am pretty sure that people, especially children, could have suffocated in such coaches. It would no longer have been lossless compression.
Wednesday, July 11, 2007
Lesson in Programming on Western Railway - Don't add a feature till needed
I had to go to a formal meeting and was dressed in a tie. The train at Borivli was crowded as usual, but getting down at Bandra was worse than I expected.
The rush of people trying to get in made exiting a very difficult exercise. I got down, but my tie got entangled. Fortunately, only my shirt and tie were twisted out of shape.
It was, however, a valuable lesson. After that, the tie was always in my pocket and not around my neck till I was out of the local train.
The habit became so ingrained that I try not to write even a line of code till it is needed.
Tuesday, July 10, 2007
Lesson in Programming on Western Railway - Exception Handling
Even after over 30 years, some of the memories of traveling on Western Railway's Bombay suburban trains are very vivid. Some of these experiences have made me conscious of a number of critical programming concepts.
For example, exception handling is a must. If you can't handle a problem, pass it to someone else who can.
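A minimal Python sketch of the principle (the train-and-passenger names are mine):
class Train(object):
    def __init__(self, name, full):
        self.name = name
        self.full = full

def board(train):
    # We cannot fix overcrowding here - raise and let the caller decide.
    if train.full:
        raise RuntimeError('no room in ' + train.name)
    return train

def commute(trains):
    for train in trains:
        try:
            return board(train)
        except RuntimeError:
            continue              # pass the problem on: try the next train
    raise RuntimeError('no train had room')   # escalate further up

commute([Train('5:10 slow', True), Train('5:27 fast', False)])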
I was returning home and the crowd was the usual size. We all try to rush in and, hopefully, get into the compartment so that we are not hanging from the door.
I try to get in from the left side of a pole in the center of the door. I suddenly find that a fellow passenger is trying to get in from the right side of the pole. Nothing wrong with that, except that his arm goes around my neck. Neither of us can get on board, and the harder he tries, the worse my condition.
To this day I cannot figure out how such a configuration occurred; but then, many problems in a multi-threaded application do not make any sense either. For a few seconds, I was sure that this was the absurd end to my life. However, other passengers, realizing the deadlock, made the other passenger release his grip; I could board the train, and then so could he.
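The coding analogue is two threads taking the same pair of locks in opposite orders; the usual cure, sketched below in a hypothetical example, is to always acquire them in a fixed order:
import threading

pole_left = threading.Lock()
pole_right = threading.Lock()

def passenger(name):
    # Had one passenger taken pole_right first and the other pole_left,
    # each would hold the lock the other needs - a deadlock.
    with pole_left:           # everyone agrees on the same order...
        with pole_right:      # ...so no circular wait can occur
            pass              # board the train

threads = [threading.Thread(target=passenger, args=(n,)) for n in ('A', 'B')]
for t in threads:
    t.start()
for t in threads:
    t.join()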
I did not get an apology. Did not expect it either. I was just grateful for the release.
Labels: Bombay suburban, exception handling, Programming
Wednesday, June 27, 2007
Overcoming a power failure in the middle of an upgrade
Upgrading a distribution invariably introduces new challenges. In this case, they resulted from the absurd risk of upgrading during the summer, in the midst of power failures. Our area in Chandigarh does not get many cuts, but still...
After about half the packages had been upgraded, the power failed. The UPS batteries did not last long enough. I restarted the upgrade and, to my relief, found that the machine still booted and allowed me to continue upgrading the distribution. It upgraded only the packages which had not already been done.
The unpleasant discovery came after the upgrade was successfully completed and I decided to apply the available updates. There were file conflicts because many of the FC6 packages were still installed.
The net helped me understand that this was a consequence of yum crashing in the middle of a transaction. However, manually fixing about 600 packages was a pain. So, I enjoyed myself and wrote a Python script to clean up the mess.
The script follows, in case anyone else ever needs it.
#!/usr/bin/python
# Power failure during an FC7 upgrade results in duplicate entries in the
# rpm database. This program creates a file 'deleteList.txt' containing
# the duplicate rpms, which may then be deleted using:
#   rpm -e `cat deleteList.txt`
# Depending upon the number of packages to be deleted, it can take time.
# Anil Seth, Jun 2007.
import rpm

NEW_DISTRIBUTION = 'Red Hat (FC-7)'
REL_SUFFIX = 'fc7'
ARCHS = ['x86_64', 'i386', 'i686']

def chk_dups(pkgs, arch):
    """Find the duplicates for a given architecture by looking at the
    distribution (4th element in the list - index 3) or by checking the
    suffix of the release (2nd element in the list).
    In case the above strategy does not find a new package, select the
    one with the highest version (1st element in the list).
    Returns 2 lists - new packages and remaining packages.
    We expect, but do not require, one item in each list.
    Returns None if there are no duplicates.
    """
    dup_pkgs = filter(lambda x: x[2] == arch, pkgs)
    if len(dup_pkgs) > 1:
        newPkg = filter(lambda x: x[3] == NEW_DISTRIBUTION or REL_SUFFIX in x[1],
                        dup_pkgs)
        restPkg = filter(lambda x: not (x[3] == NEW_DISTRIBUTION or REL_SUFFIX in x[1]),
                         dup_pkgs)
        if len(newPkg) == 0:
            max_version = max([x[0] for x in dup_pkgs])
            newPkg = filter(lambda x: x[0] == max_version, dup_pkgs)
            restPkg = filter(lambda x: x[0] != max_version, dup_pkgs)
        return newPkg, restPkg
    else:
        return None

def delete_duplicates(ts, dups):
    """Convert the items in the dups dictionary into package names
    suitable for erasing and write them into a file.
    We could use ts.addErase(rpmname), ts.check(), ts.order() & ts.run()
    to delete the packages through the program; hence, ts is being
    passed as a parameter. Each value in dups is a pair of lists, of
    which the second is the one for deletion.
    """
    f = open('deleteList.txt', 'w')
    for (name, arch) in dups:
        for pkg in dups[(name, arch)][1]:
            # name-version-release.arch, the form rpm -e expects
            rpmname = name + '-' + pkg[0] + '-' + pkg[1] + '.' + pkg[2]
            f.write(rpmname + '\n')
    f.close()
    print '''Now as root, run
    rpm -e `cat deleteList.txt` '''

def main():
    """Iterate over the rpm database, creating a dictionary with the
    package name as the key and a list of attribute lists as the value.
    Duplicates need to be checked for each architecture separately;
    hence, we create a dictionary with a (name, arch) pair as the key.
    The values are the two lists returned by chk_dups.
    The list of new packages is returned in case we want to verify the
    installation of those packages; that is not being done here.
    """
    ts = rpm.TransactionSet()
    mi = ts.dbMatch()
    packages = {}
    for hdr in mi:
        name = hdr['name']
        attr = [hdr['version'], hdr['release'], hdr['arch'], hdr['distribution']]
        if name in packages:
            packages[name].append(attr)
        else:
            packages[name] = [attr]
    duplicates = {}
    for name in packages:
        for arch in ARCHS:
            dups = chk_dups(packages[name], arch)
            if dups:
                duplicates[(name, arch)] = dups
    delete_duplicates(ts, duplicates)

if __name__ == '__main__':
    main()
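In case anyone does use it: save it as, say, fix_rpmdb_dups.py (the name is mine), run it as a user who can read the rpm database, and then, as root, run rpm -e `cat deleteList.txt` as the script itself suggests. Backing up /var/lib/rpm first would be prudent.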
Do we need new versions of distributions?
Having upgraded to Fedora 7, do I or my parents notice any difference?
Fortunately, my parents have not noticed any difference and that is the way we would have liked it.
Had the upgrade made any changes to the way my parents worked, I would have had a problem helping them with their work. So the question arises: why upgrade, and what do we expect from upgrades?
If we got a new computer, we would have no choice but to upgrade. If we need newer versions of some programs, it would be easier on a recent distribution. It is easier for me if my parents' computer is on the same version as mine, so that in case they have a problem, I can reproduce it.
Do I want to spend a day every 6 months upgrading? Am I looking forward to the annual coordinated release of the new version of Eclipse? That is the news which triggered this thought.
The Linux kernel now appears to be following a new path: there is no new version in sight. There is no reason that Fedora cannot follow the same method. With the separation between the core and everything else gone, there isn't even a question of deciding what goes into the core.
Instead of a new distribution, it would be nice to focus on new ways of upgrading a distribution. There could be packages which are installed but rarely used, so we may not wish to upgrade them unless we ask for it or the upgrade of another package breaks them.
The upgrade system should not pester us about upgrades available for a package which we have installed but never used. If I am using, say, xpdf, and evince is introduced as a preferred product by the Fedora community, the upgrade system could perhaps offer the option of switching to the alternate product. Only once, though - unlike the telcos.
Monday, June 25, 2007
Why can't an NFS export be written to?
Upgrading to Fedora 7 created an unexpected problem.
An NFS export which was write-enabled kept reporting a read-only file system.
It seemed to be an SELinux issue and, after failing to resolve it, I just switched SELinux to permissive mode.
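For the record, switching to permissive mode amounts to the standard commands (not quoted from my session):
setenforce 0    # permissive until the next reboot
# to make it persistent, set SELINUX=permissive in /etc/selinux/config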
Upgrading to Fedora 7 on a machine without a cd/dvd drive
I found the following post by Carson very interesting and promising:
http://www.ioncannon.net/linux/68/upgrading-from-fc6-to-fedora7-with-yum/
On my main system, I installed F7 via the dvd. The second and third systems do not have a cd/dvd drive.
I decided to use the above instructions; however, I did not wish to install over the net. So, some of the instructions were modified (a sketch of the relevant configuration follows the steps):
On a server with a dvd drive:
1. Turn the keep cache option on in yum.
2. There will be a cache/yum/fedora directory. Copy all the rpm files from the dvd into the /var/cache/yum/fedora/packages directory.
3. Export the /var/cache/yum directory using nfs, writable and with no root squash.
4. On the second machine, mount the above directory. Make suitable changes in /etc/yum.conf. Automount is very useful.
5. Using rpm, update fedora-release and fedora-release-notes.
6. Run yum update.
It will still need a net connection, but will use it only when needed; e.g., whatever was installed earlier from the extras repository will be downloaded.
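For illustration, the relevant bits of configuration might look like the following (the paths are the standard ones; the subnet is an example of mine):
# /etc/yum.conf - keep the downloaded rpms (step 1)
[main]
keepcache=1

# /etc/exports on the server with the dvd drive (step 3)
/var/cache/yum  192.168.1.0/24(rw,no_root_squash,sync)
# re-read the exports after editing:
#   exportfs -ra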
In my case, it took much longer than it would have taken me to physically move the dvd drive but that would not have been fun.
One problem which gave me a fair amount of trouble was that I had downloaded stuff from both freshrpms and livna. When both repositories were active, I had dependency problems. Once I enabled livna and disabled freshrpms, the problems were resolved.
An interesting problem was that the new kernel panicked because it could not find the root file system. Fortunately, the system worked fine with the earlier kernel. Carson had emphasised labels. Applying labels and making the appropriate changes in the grub.conf and fstab files resolved even this issue.
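The idea is to refer to file systems by label rather than by device name, so the boot does not break when device names change. A sketch (the device name is an example):
e2label /dev/hda2 /            # assign a label to the root partition
# /etc/fstab then refers to the partition as:
#   LABEL=/    /    ext3    defaults    1 1
# and the kernel line in grub.conf uses:
#   root=LABEL=/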
A major advantage of the above scheme is that we can upgrade a machine without stopping normal activity. The machine can continue to be used, though I am not sure if all applications will work properly during the transition.
And I look forward to a time when a distribution will be continuously evolving - never needing a major, disruptive upgrade.
An interesting comment on the above post, on the matter of scale: "depending on whether you see a 2 hour download as a problem or not :D"
For me, even with the so-called broadband, downloading just the extra packages took longer! Beats me why BSNL and other ISPs do not mirror these sites.
Sunday, May 13, 2007
Why do we suspect anything new?
A couple of weeks ago, I was trying to use Jython with JasperReports. Jython kept ignoring the classpath.
I suspected the beta version of Jython and then JDK 6. Only then did I notice that the jython shell script in the distribution specifically manipulated the classpath.
Even if I am the only user, I prefer to install any application in a common location, like /usr/local. This also created a problem for jython. It couldn't create a cache directory and failed to find the required classes even though they were available on the classpath. Shouldn't jython create the cache directory in the user's home?
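If memory serves, the cache location can be pointed at a writable directory; something like the following should work (the path and script name are examples):
jython -Dpython.cachedir=$HOME/.jython-cachedir myscript.py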
I still can't understand why we overlook the signs of problems and suspect anything new instead.
Tuesday, March 27, 2007
Two disks, no CD drive and a disk crashes
Both disks were bootable. One failed. How hard could recovery be, even though the system had no cd drive?
I removed the failed hard disk and the system just would not boot. It insisted on mounting the file systems on the disk which was no longer present.
It allowed me to go into maintenance mode and fix the problem, but the root file system was mounted read only. No matter how hard I tried, I could not fix /etc/fstab.
I am sure there is an option in grub, or somewhere, which would have helped me. However, that is not a very useful option for a person who now needs the help of the dir command to program in Python.
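In hindsight, the escape I was looking for is probably the standard remount from maintenance mode (commands from memory, not from my notes of that day):
mount -n -o remount,rw /     # make the root file system writable
# ... edit /etc/fstab ...
mount -o remount,ro /        # optionally back to read only before rebooting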
I had to disconnect the cd drive from another system and connect it to this one, boot in recovery mode, fix the fstab file, and so on.
I am not about to buy another cd drive. However, having finally understood automount, I will not mount a single partition in fstab unless it is absolutely essential.
This does bring up the question - why is automount not used more often? Any drawbacks?
Something more to learn.
It's simple after one knows
I have a small network at home and have been wondering about the easiest way to ensure that the packages and cached updates are easily shared.
I prefer Fedora simply because I am comfortable with it. I have a local repository created from the downloaded cd's and have set the keep cache option in yum.
NFS seemed the most convenient way to share, but mounting these exports at boot time was not viable: the 'server' may not be up. Manual mounting is irritating. Automount is obviously the solution, but I had not tried it for years.
Once I realized that the first entry in the sample auto.misc was for an NFS file system, in spite of the server name being 'ftp...', the rest was trivial.
I exported the local repository as a read only file system and the yum cache directory as read-write with no root squash.
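The configuration amounts to a few lines. A sketch, with 'server' standing in for my machine's name and the paths as examples:
# /etc/auto.master
/misc   /etc/auto.misc

# /etc/auto.misc - the repository read only, the yum cache read-write
repo    -ro,soft,intr    server:/var/ftp/pub/fedora
yum     -rw,soft,intr    server:/var/cache/yum
The shares then appear on demand as /misc/repo and /misc/yum.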
I can't recall why I had given up on automount a few years ago. Quite possibly there was no access to Google to help me over the minor hurdles.