First of all - I just want to say that I really enjoy reading your column in Linux Magazine, and that the Knoppix CD is never far away for my work! Cheers!
I just read your response to the "Getting Online" letter in the May 2009 issue, and I had a thought about another possible factor. Because the reader can access Google and carry out a search, Internet access and DNS resolution seem to be working, which makes a proxy problem unlikely.
A common theme I've found lately when fixing other folks' computers is the MTU settings on the operating system and/or the router. Sometimes, a person's network or ISP warrants a lower MTU value, and as a consequence only certain websites and/or files are accessible.
Should the ISP not provide an automatic setup for PPP/PPPoE/WLAN/LAN on the server side if a different MTU is required? Using a lower MTU setting (ifconfig devicename mtu number) may indeed help in some environments if the receiving side gets confused by the default MTU. The default MTU (maximum transmission unit) is usually 1500, but for ISDN I have also seen values like 1450, which may or may not give a better data transfer rate because of protocol header encapsulations that fit into the "remaining" space of a unit. This is, of course, just an educated guess. But in any case, you are right: under certain circumstances, changing the MTU setting repairs otherwise inexplicably slow network performance.
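As an illustration of where such odd values come from, consider the common PPPoE case: the PPPoE and PPP headers consume 8 bytes of Ethernet's 1500-byte payload, which is why PPPoE links frequently need an MTU of 1492. (The interface name below is just an example.)

```shell
# Why PPPoE links commonly use MTU 1492:
ETHERNET_MTU=1500
PPPOE_OVERHEAD=8             # 6 bytes PPPoE header + 2 bytes PPP protocol ID
echo $((ETHERNET_MTU - PPPOE_OVERHEAD))    # prints 1492
# To try a lower MTU on a running interface (device name is an example):
# ifconfig eth0 mtu 1492
```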
I am by no means a pro at Linux but I do use it daily and enjoy it. I would like to get more in-depth with it and help to contribute to the open source community. Any words of wisdom or tips on small projects that would be interesting?
The Internet contains enough information about GNU/Linux and other free software that any level of in-depth understanding is accessible to any user who is willing to explore. Some places I would recommend for a start, where you can also share your own experience, are wikis and forums. But even more exciting than networking on the Internet is getting involved in the various projects consisting of real people, not all of them programmers, most of whom are just casual computer users with interests far beyond geek stuff.
You might want to look around for Linux user groups or projects that you would like to know more about. Even your favorite distribution might have its own user group, where you can meet the people behind it and maybe become part of the community on a non-technical level. Just helping someone install GNU/Linux, or pointing someone to the right place to look for help, is a most welcome contribution. And each application has its own community; LXDE, KDE, GIMP - you name it - have communities that share and contribute knowledge and experience to improve and promote their favorite hobby, working tool, or passion.
Hi Klaus. I am a Detective and Computer Forensics Examiner in California. I have been using Linux for four years, including SUSE, Knoppix, and Helix, both at home and for work. I use an LCD TV for my monitor at home, and it only supports a few resolutions, including 1024x768 60Hz. With some Live distros, I have to pass boot commands to set the correct resolution.
I have the most current release of the Knoppix CD, 6.0.1, and the cheat codes that I am familiar with for previous releases do not seem to work. Are there boot commands for screen resolution and refresh rate in this release? If not, can I remaster the CD so the default resolution is what I need?
The Xorg server in Debian Lenny now uses the RandR extension by default, in which modelines and predefined resolutions in /etc/X11/xorg.conf are mostly ignored and the "preferred" resolution from the graphics card's firmware is used.
To change this behavior, the old cheat codes, which only set the xorg.conf values for static resolutions, are insufficient. A new cheat code for falling back to the old behavior will be added in later versions, but for now, you can manually change /etc/X11/xorg.conf by disabling the "RandR" option in the ServerLayout section:
Section "ServerLayout"
    ...
    Option "RandR" "False"
EndSection
For Knoppix, another option is to boot in framebuffer mode, either with the Knoppix-specific boot option "fb1024x768" or the more general "vga=791", which sets 1024x768 as the framebuffer resolution for text mode. Together with the line

    Driver "fbdev"

in the "Device" section of /etc/X11/xorg.conf, you get a fixed resolution, which is slower than the native driver for your graphics card's chipset and does not support direct rendering or 3D but usually works well with TFT monitors.
I have an Asus motherboard with a JMicron controller and two hard drives on RAID 0. When I try to install Debian, it only sees the drives separately. I've searched the forums - lots of people have this issue - and I was unable to find an answer. I'd like to know whether it's possible to install Debian on motherboards with this controller when using RAID.
As far as I can tell, your JMicron controller is a "SoftRAID" controller, which needs help from the operating system in the form of a driver to join multiple disks together as one large drive, as in RAID 0. Therefore, either you would need a specialized driver for this controller (i.e., a kernel module for Linux) or you would need to avoid using the SoftRAID function of the controller at all and switch to the better-supported software-only RAID instead. A real hardware RAID controller would automatically handle disks that have been defined as a RAID array by firmware/BIOS setup (independent of the running operating system), and the disks configured this way would always act as a single disk.
This means you cannot, or at least should not (because of possible data loss), install a dual-boot system with one operating system supporting SoftRAID (as opposed to software RAID) and the other seeing two separate disks.
If you want ONLY to run GNU/Linux with this controller, you can use software RAID to first partition two similar drives identically and then join their space via software RAID provided by the distribution's installation program.
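As a rough sketch of the software RAID route with mdadm (device names are examples, and these commands destroy existing data on the partitions they touch, so double-check them first):

```
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0                               # or any filesystem you prefer
mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record the array for boot
```

The Debian installer offers the same functionality interactively under its partitioning step, so typing these commands by hand is only necessary when setting up the array outside the installer.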
I was wondering if you could help me on how to set up a domain on Linux. I am running Fedora Core 6 (which is the server) with all server packages installed, and I was wondering how you would be able to set one up because I am having a few difficulties making one.
I'm unsure whether you mean a DNS (Domain Name System, Internet standard) or a Windows "Domain."
In the DNS case, you have to run a nameserver that groups the IP addresses of your computers under a private domain name (such as host1.mydomain.local, host2.mydomain.local, etc.). Explaining how DNS works is surely beyond the scope of a simple answer; you can find more help at http://en.wikipedia.org/wiki/Domain_name_system.
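For illustration, a minimal zone file for such a private domain might look like this (all names, addresses, and timer values here are made up for the example):

```
$TTL 86400
@       IN SOA  ns1.mydomain.local. admin.mydomain.local. (
                2009050101 ; serial
                3600 7200 604800 86400 )
        IN NS   ns1.mydomain.local.
ns1     IN A    192.168.0.1
host1   IN A    192.168.0.10
host2   IN A    192.168.0.11
```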
For a Windows Domain, you need to run a Samba server (including the NetBIOS name service component), which defines mappings between IP addresses and a Windows "Domain name." Samba is configured via the configuration file /etc/samba/smb.conf. The Samba network service, which is traditionally used for sharing disk drives and printers, as well as user authentication data between computers running Linux and Windows, has its main homepage with all links to relevant information at http://www.samba.org/.
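As a rough sketch, the [global] section of smb.conf for an NT4-style domain controller could contain settings like the following (the domain name is an example; see the smb.conf manual page for the full picture):

```
[global]
   workgroup = MYDOMAIN
   security = user
   domain logons = yes
   domain master = yes
   preferred master = yes
   os level = 65
```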
Greetings Klaus: I have two similar, but not identical, nVidia 680 motherboards, about two years old. On Live CD boot of the later distros (SUSE 11.1, the last two versions of Ubuntu, and now Knoppix 6 and 6.1), there appears to be a long slowdown on the sata_nv module during boot, causing these installers to find no SATA hard drive on either machine (no RAID on either box). What I've tried from forum speculators: jumpering the SATA drive to 1.5Gbps and defaulting the motherboard BIOS settings, pci=nomsi, brokenmodules=sata_nv, and sata_nv adma=0.
I don't need to tell you I can't install anything new on these mobos, and XFX (the nVidia brand) doesn't support this trouble. Yikes - I thought newer kernels were more compatible with recent hardware.
I have no better solution for this yet, other than replacing the computer with one with a working Linux-compatible controller as a warranty case (I know, probably not an option), waiting for newer kernels to fix the problem, trying a kernel with generic AHCI SATA support, or trying different boot options that can change interrupt handling on SATA, such as pci=bios, acpi=off, noapic, and nolapic.
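For experimenting, several of these options can be combined on a single line at the Knoppix boot prompt, for example:

```
knoppix pci=bios acpi=off noapic nolapic
```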
First may I say that I admire all the work you have done on Knoppix. It is an excellent offering and I use it all the time. However I want to run it in VMware/VirtualBox on occasion and the kernel-headers are never available (to run guest additions etc.). I do an update and look for them but can never find the ones that match, just every other one going back to the abacus! Are they hidden somewhere, or does this task require a special procedure?
Your question is a good starting point for a general look into source availability for open source components. But first I'll answer your Knoppix-related question: Knoppix uses a plain kernel.org kernel, aufs patches for overlays aside, so you should be able to get the kernel source, same version as your kernel, from kernel.org and copy your /boot/config-* to .config in the unpacked kernel source directory in order to get the same configuration. Then you can compile additional modules with that kernel source. The DVD version of Knoppix already contains the kernel source under /usr/src/linux, which saves you some work.
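A sketch of the first step, deriving the matching vanilla version from the running kernel's version string (the version shown is an example; substitute the output of uname -r):

```shell
# Derive the vanilla kernel version from the running kernel's version string.
KVER="2.6.28.4-knoppix"        # in practice: KVER="$(uname -r)"
BASE="${KVER%%-*}"             # strip any local suffix such as -knoppix
echo "$BASE"                   # prints 2.6.28.4
# Fetch linux-$BASE from kernel.org, unpack it, then:
#   cp /boot/config-$KVER linux-$BASE/.config
#   cd linux-$BASE && make oldconfig && make modules_prepare
```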
In general, for all software released under the GPL, such as the Linux kernel, the GPL gives you the right to acquire the sources directly from wherever or whomever you received the binary software, regardless of source availability anywhere else on the Internet. Knoppix uses the GPL paragraph 3(b) possibility of providing the source on request directly from the main distributor, which saves a lot of Internet traffic for additional source CDs and duplicate code. On the other hand, it's very easy (and probably faster than postal service) to download the corresponding sources of every package, in all versions, directly from the programs' authors or the software packagers' websites; Debian offers a snapshot archive, http://snapshots.debian.net/, that provides a searchable database of packages and sources starting from 2005, which covers most software installed in Knoppix, except for some scripts that are already present in source form.
To Klaus, I presently have a laptop with an Intel CPU that came with XP. I have dual booted my computers (using GRUB, mostly) with Linux since about 2000. It seems like there have been many improvements in software in the last 10 years, but I am still waiting two to three minutes for the login screen and another minute to start an application. I am the only person who uses my computers (no multi-user setup). I power up to use them and power down when I am finished. My computer configuration (hardware and software) rarely ever changes, yet I wait for the same system checks to complete every time.
My question: Is there a reason that the start time has to stay so long? It seems that my bootloader could be where I not only select my OS but also the user. The user's configuration at last power-off could be saved to a secure place on the disk and used at next power-on for a streamlined start. A full computer startup could be selected if needed or used as default for those who like to wait. Will this wait be unchanged 10 years from now?
The traditional way to start a Unix system is by starting different system/disk checks and services one after the other. This way, it is very easy to identify and repair problems in the start procedure. Recently, Linux distros have started to parallelize some of the startup procedures, but it is not easy because there are so many different setups and hardware configurations that could require a specific sequence of prerequisite tasks to complete.
With known hardware and some optimization, you can start the desktop within about 20 to 30 seconds from the moment the kernel loads, but doing so requires a lot of in-depth knowledge of the boot procedure and the services that have to run.
Some easy optimizations: identify services that you definitely don't need, such as (1) running your own mail server in an environment where you just receive and send mail via a provider's servers, (2) running NIS and other authentication services that you don't need, (3) starting database servers that you never really use, or (4) starting web, FTP, NFS, Samba, and other servers that you don't need.
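On a Debian-style system, such services can be taken out of the boot sequence like this (the service names are examples and differ between distributions):

```
update-rc.d -f exim4 remove      # local mail server
update-rc.d -f nis remove        # NIS authentication
update-rc.d -f mysql remove      # database server
update-rc.d -f apache2 remove    # web server
```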
Some boot performance can be gained by optimizing the sequence of services as well - for example, starting the desktop at a very early stage, before the print server and network are set up. Basically, this is what other operating systems do to make booting look fast, by starting the non-interactive stuff in the background while the visible stuff is already up. Traditionally, the graphical environment comes last in the GNU/Linux boot procedure; in fact, it can be started much earlier in the boot sequence once the basic hardware initialization and a set of network and internal services are done.
Another possibility would be configuring suspend-to-disk just to do the real boot procedure once and then saving the working session for later use. Usually this takes only about 10 to 20 seconds. But when hibernating, the hard disk content of mounted partitions and the general hardware configuration must not be changed before resuming from hibernate; otherwise, you will end up with a non-booting system or even lost data. This is the same problem with all operating systems that support suspend-to-disk - Linux is no exception.
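For reference, on a kernel with hibernation support and enough swap space, suspend-to-disk can be triggered manually as root (a sketch, not a recommendation for unattended use):

```
echo disk > /sys/power/state
```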