Any chance you could start putting the LJ code examples on the web site, so I don't have to type the stuff in? Especially hairy stuff like the PGP patch on page 6 of the February '97 issue. Sheesh. Really, I think this would be great. —Peter Watkins Washington, DC peterw@clark.net
We think this is a good idea too, and starting with this issue, example code can be found at ftp://ftp.linuxjournal.com/lj/listings/issue##/. The files are gzipped tar files, one for each article, named article##.tar.gz. A footnote to each article that has listings will give you the article number.
My first question is, “What does CGI stand for?” The second question is, “Why did the editor(s) never ensure that the abbreviation was defined at least once in the magazine?” This failure to define acronyms is very annoying to me, since the computer world is so chock full of them. Acronyms in this environment are also very context-sensitive—so much so that defining such terms should be mandatory in every article published anywhere. —Mac Bowles Senior Software Engineer Lockheed Martin Astronautics kbowles@claven.mmc.den.com
First, CGI stands for Common Gateway Interface. Second, I agree that tossing acronyms around without defining them is annoying, and plan to be more careful in the future.
I have read the letter to Linux Journal from Mr. Jack McGregor, who advocates dumb terminals for schools because of their low cost. I am also a Linux consultant. However, I am not pushing dumb terminals but Linux-based X terminals built from old 386 and 486 PCs. My experience with low-end 486s demonstrates that they are indeed very fast and fun to use with major desktop packages (WordPerfect, Netscape, Applixware and StarOffice are a few I have tried). They can also run games (e.g., Doom).
I have customers who have turned to this solution not to save money but to gain raw speed. For example, one is using a dual 200MHz Pentium Pro with 192MB of RAM as a server for 10 users (C++ developers). Everything is in RAM all the time for all users. This beats any network when it comes to loading software, searching through directories and so on. In one case, the speedup they are getting by sharing the same server, instead of the more typical Windows-workstation-to-server arrangement, is close to a factor of 10.
While X is an old technology for some, Linux is making it into a revolution because of its low cost. —Jacques Gelinas jacques@solucorp.qc.ca http://www.solucorp.qc.ca/linuxconf/
I have been using the Slackware distribution 3.0 (1.2.13 kernel) for over a year. I wanted to upgrade to the 2.0 kernel, and decided that a new CD-ROM distribution would be convenient. After reading about Red Hat 4.0 in the LJ Readers' Choice Awards and an InfoWorld review that selected Red Hat 4.0 as one of the two best operating system releases in 1996 (the other was NT workstation), I decided to order.
My installations (about a dozen trials) were plagued with random segmentation faults, stack dumps and reboots. The Red Hat support team responded as advertised and suggested that hardware (i.e., CPU, RAM, cache, motherboard) was causing my problems. The Red Hat technician pointed me toward the RAM, because I had just had a brand-new motherboard installed, and he presumed that neither the motherboard nor the brand-new cache was the problem. Finally, the Red Hat technician suggested that:
“Red Hat is sometimes not able to run (for unknown reasons) on some hardware that will run Slackware.” (E-mail from Red Hat support.)
I had my RAM diagnosed by a local computer repair shop that has a hardware technician who is also a Linux guru. No problems were reported with the RAM, but the technician could not duplicate my installation symptoms.
Finally, a bit dazed and still suspecting the RAM, I purchased some extra RAM. I tried the installation one last time, using only the new RAM—I still could not successfully install Red Hat 4.0. Alas, I am back to Slackware 3.0 and out $60 for Red Hat.
I am truly disappointed that I cannot get Red Hat 4.0 working. It seems Red Hat has so much to offer new Linux users in terms of configuration, installation, etc. But, as Microsoft can attest, it will be difficult for any commercial distribution to support every PC configuration. My PC is the evidence.
All is not lost, though. As the webmaster for our software manufacturing firm, I take care of the intranet web pages. We need a new internal web server, and I am adamant that it run on a Linux box. Maybe the Red Hat distribution can fill this bill. For my PC, though, I am sticking with Slackware. —Jeffery C. Cann Software Engineer jcann@intersw.com
Cory Plock wonders why he can't find Linux distributions on the shelves of local software stores (LJ February 1997). Perhaps he's just living in a technologically repressed area. I just checked my nearest shopping-mall computer store here in Quebec City: They have the 6-CD InfoMagic package and the 4-CD Walnut Creek distribution, both up to date and competitively priced. A nearby store has several copies of two books on Linux. The distribution mechanisms must exist—encourage local dealers to use them. —Don Galbraith dsg@clic.net
Hi. I have been a long-time reader of LJ, and it has been a great help to me, as I am sure it has been to many in the Linux community. My friends on the Net and I have now made our own contribution to Linux, one I thought would be interesting to you and helpful to your readers: an on-line Linux Users Group for people interested in learning more about Linux, helping other Linuxers and promoting Linux. Our web address is: http://www.linuxware.com/. —Peter Lazecky peteri@linuxware.com
I am using the version of amd that comes with Red Hat 4.0. The NFS hosts are running SunOS 4.1.4 and Solaris. I found the suggestion that amd is relatively bug-free to be incorrect. In the first few days of using it, I found two important bugs. The first is that it confused the node name of one machine with the IP address of another; that is, I found directories from one machine under the name of the other in the /net directory. The second bug I have experienced several times: if a directory is unmounted, amd doesn't seem to know how to mount it again for several minutes afterwards. Both of these bugs result in directories disappearing—including my home directory. It can be very hard to justify using Linux when major problems like these exist.
As you can see, this is a big issue for me. I have seen several postings in dejanews referring to other problems with amd. I would like to see more support, but up to this point I have found very little in the way of answers to most questions about amd. Thanks for the article anyway. —David Uhrenholdt duhrenho@vette.sanders.com
I read Larry Wall's article on Perl on the way in to work today. My work involves C, Korn shell and Perl. I am convinced that Perl is a marvelous language. Mr. Wall's article supports that.
I understand the notion of creativity as a function of a large palette (...there's more than one way....) and the theory that “form follows function”. My conclusion is that Perl is unacceptable as a development tool because I cannot support it. It takes too long to discover (glean, figure out, guess at, puzzle out) which of the myriad possible methods was used by the original developer—even when that was me. I will continue to write Perl for fun and use more documentable, supportable languages for important systems.
I also wonder if Mr. Wall's writing would be a little more effective if he didn't attempt to be funny in every paragraph. He is the linguistic equivalent of the aggressive graphics that prevent many people from being able to read that very trendy San Francisco-based magazine on pop-wired culture. —Brandon Sussman #VATAcc70713@vantage.fmr.com
5 Feb 1997: I enjoyed the interesting article by Bob Stein on algorithms for deciding whether a point is in a polygon in your March issue (“A Point about Polygons”). It is too bad that Bob wasn't familiar with the algorithm used for this in the WN web server (see http://hopf.math.nwu.edu/), as it has some interesting similarities and differences when compared with the algorithm he describes. Like Stein's, it uses integer (actually long int) arithmetic rather than floating point.
I first used this algorithm in a version of WN released in July of 1995. As with Stein's algorithm we start by translating, so the test point is at the origin. Instead of counting the parity (evenness or oddness) of the number of crossings with the positive Y-axis, the actual signed number of crossings is counted (I used the positive X-axis instead of the Y-axis, but that is immaterial). More precisely, the algorithm counts +2 if an edge crosses the positive X-axis with positive slope and -2 if it crosses with negative slope. If an edge ends on the positive X-axis, it gets a count of only +1 or -1 depending on the slope. If the edge lies entirely in the positive X-axis, it gets a count of 0. If the origin (test point) is actually on any edge, we declare that the point is in the polygon and quit. After all edges have been counted, we declare that the test point is outside the polygon only if the total count is zero.
The implementation in WN is about three times as long as Stein's implementation, largely because I wanted to get all the special cases right even if in practice they don't matter much. In particular, if the test point is on an edge, it is always declared in the polygon. Also polygons with only two sides or degenerate polygons (like points) work properly.
There is one very big difference in the way the two algorithms behave when the polygon is not simple (i.e. crosses itself). Imagine a five-pointed star drawn in the usual way without lifting your pencil from the paper. With the even/odd count only points in the five triangular “tips” of the star will be considered inside while points in the pentagonal central region will be considered outside. The WN algorithm, on the other hand, will count all these points, tips and center, as “inside”. This is, in fact, the reason I chose this method rather than the even/odd count.
The reason this difference occurs is not too difficult to understand. Imagine the polygon is a stretched rubber band held in place on a table with thumb tacks at each vertex. At the test point we erect a vertical post perpendicular to the table. Now remove all the tacks and let the rubber band contract into the post. It may wrap around the post some number of times positively or negatively (i.e., counter-clockwise or clockwise) or it may not be hooked on the post at all. The even/odd algorithm is counting whether the number of times it wraps around is even or odd, while the WN algorithm is counting the full number.
If the polygon does not cross itself the actual number can only be 0, +1, or -1, so the even/odd algorithm works fine. With the five-pointed star, if the post is put in the central region, the rubber band goes around twice, and so the algorithms give different answers.
If anyone is interested in the WN implementation, just download the distribution and in the file wn/image.c look for the functions dopoly() and segment(). The distribution can be found at http://hopf.math.nwu.edu/. —John Franks john@math.nwu.edu