I was going to praise your magazine for printing code that I could read
in the article on containers. (Okay, I am a little behind in my reading.)
But then I got to At the Forge, and its code is in a font so small that
even when I enlarge it, it's still too small. Please help an old man's eyes.
—
Doug Broadie
Fitting code into a readable, non-word-wrapped format that works across devices is surprisingly difficult. It's a challenge every month for our layout person. Although I'm sure there will be some articles that are frustrating with particularly long lines of code, we'll keep trying to make it all readable—or as readable as possible!—Shawn Powers
Considering the vulnerabilities that have become known
recently in OpenSSL, please publish an article on OpenSSL
vs. LibreSSL, and when LibreSSL may be ready for prime
time. I did find this article that is about a year old:
https://blog.hboeck.de/archives/851-LibreSSL-on-Gentoo.html.
—
Richard
Thanks for the tip. Hopefully a contributor will grab your idea and give it a go.—Shawn Powers
I just wanted to thank Joey Bernard for the great introduction to
SymPy [see Joey's article in the July 2015 Upfront section]. It always gives me a kick to read (and sometimes replicate) on my
cobbled-together Linux box what once was the demesne of secret government
research facilities. As more and more emphasis is placed on reproducible
research rather than on splashy headlines, these types of articles will
be invaluable.
—
Kwan L. Lowe
Joey Bernard replies: It is always great to hear from people who get some use from the articles I write. In my day job, I am constantly impressing on researchers the importance of reproducible research, so consider me a kindred spirit.
Here's a follow-up comment regarding Gary Artim's letter and Shawn Powers' reply in the July 2015 issue.
Although find's exec option can be handy, I would respectfully disagree that it's “probably even more efficient” than xargs. As I understand it, the user-supplied command is invoked once for every file that matches find's criteria.
Each command invocation involves overhead: program initialization, option processing, cleanup and so on.
For commands that support multiple files (such as rm, tar, zip, grep and so on), the xargs approach is better. The filenames are collected into a list, and the user-supplied command is invoked a minimal number of times. The overhead is minimized.
For small numbers of matching files, the exec option won't cause any noticeable delay. But, if the user-supplied command is non-trivial, or if there are large numbers of matching files, the xargs approach can avoid a lot of wasted processing time.
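For example, here's a rough sketch of the difference (the file pattern and search string are only placeholders):

# One grep process per matching file, with exec's \; terminator:
find . -name '*.log' -exec grep -l ERROR {} \;

# Filenames batched into as few grep invocations as possible:
find . -name '*.log' -print0 | xargs -0 grep -l ERROR

# Modern find can batch the same way via the + terminator:
find . -name '*.log' -exec grep -l ERROR {} +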
As a follow-up article, I would suggest discussing how to use a find-xargs
pipeline to execute commands where the file list is not the last item on
the command line (for example, copying or moving a set of files to a destination
directory). The xargs command is supposed to support this, but in my
experience, it's quite fragile. Maybe I'm missing something.
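For instance, here's a sketch of the kind of pipeline I mean (assuming GNU findutils and coreutils; /tmp/backups is just a placeholder destination):

# -I inserts each filename where {} appears, but it implies one
# mv invocation per input item, giving up the batching advantage:
find . -name '*.bak' -print0 | xargs -0 -I {} mv {} /tmp/backups/

# GNU mv and cp accept -t to name the destination first, so
# ordinary batched xargs works again:
find . -name '*.bak' -print0 | xargs -0 mv -t /tmp/backups/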
—
Chris
Hmm, I'll have to look into this and see if I can make an interesting article out of it. Thanks for the tip!—Shawn Powers
Regarding Dave Taylor's article “When Is a Script Not a Script” in the May 2015 issue: here's an alternate way to send a text message, one that doesn't require mail setup, using curl:
#!/usr/bin/bash

# Wait until textbelt.com answers a ping, retrying every 5 seconds:
while [ 1 ]
do
    pingout=$(ping -c 1 textbelt.com)
    STATUS=$?
    if [ "$STATUS" -eq "0" ]; then
        break
    else
        sleep 5
    fi
done

# Text this machine's IP address via the textbelt.com service:
ipaddress="$(/usr/bin/hostname -i)"
/usr/bin/curl http://textbelt.com/text \
    -d number=9999999999 -d "message=$ipaddress" \
    > /var/log/rc.local.log 2>&1 &

exit 0
Cheers! I enjoy your column.
—
Gary Artim
As always, it's a pleasure to read, and there's so much to learn. The only typo I
spotted in the July 2015 issue was on page 38, in Dave Taylor's
“Working with Functions: Tower of Hanoi”,
as I am sure everyone will mention. The solution is (2**n)-1 and not
(2**n)+1 as printed.
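For what it's worth, a quick check at the bash prompt for three disks (minimum seven moves):

$ n=3; echo $(( (2**n) - 1 ))
7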
—
John
Dave Taylor replies: By George, I think you're right!
In the July 2015 issue, Dave Taylor writes, “The biggest limitation with shell functions is that they can return an integer value only of 0–127, where a typical script actually utilizes the 0 = false or failure, 1 = true or success”, and the above is even repeated as a big banner across the top of page 37.
These claims are quite easy to verify. Let's try the following shell script:
check () {
    return $1
}

for x in 254 255 256 257 258 ; do
    check $x ; echo $?
done

if check 0 ; then echo "A" ; fi
if check 1 ; then echo "B" ; fi
If you take the quoted statement at face value, you will likely be
quite surprised by the results of running this script with the help of a few
different shells (say, bash, dash, zsh or ksh) that accept the syntax. What
you will see may be somewhat different in places but not in a material way.
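Under bash, for example, the output is:

254
255
0
1
2
A

That is, out-of-range return values wrap modulo 256, and the shell treats 0 (not 1) as success, which is the opposite of the quoted convention.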
—
Michal Jaegermann
Dave Taylor replies: Thanks for writing in, Michal. Of course, a function can return a numeric value in the shell, and that value can be interpreted as the calling script desires. My point was more that function return values are constrained by a simple integer range—whether it's 0–127 or 0–65536—so I generally just ignore return values from functions entirely and use global variables instead. In a more sophisticated programming language, of course, functions could return any number of complex data types and sidestep the limitation.
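A minimal sketch of the global-variable pattern Dave describes (the function and variable names are just illustrative):

# The function stores its result in a global variable instead of
# encoding it in the exit status:
result=""
sum () {
    result=$(( $1 + $2 ))
}

sum 40000 30000
echo "$result"    # prints 70000, far outside any exit-status range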
Thanks very much for the Docker articles in the June 2015 issue of
LJ!
They were quite a helpful primer. [See Shawn Powers' “Doing Stuff
with Docker” and Federico Kereki's “Concerning Containers'
Connections: on Docker Networking”.]
—
Erik
You're very welcome! I'm a big fan of breaking down complicated or scary topics into the basics. It's mainly because I know that awkward, embarrassed feeling of being in a conversation with others about a new technology (such as Docker) and not having any clue what they're talking about. Information wants to be free! (Okay, I'll get off my soapbox. Thanks again for the kind words.)—Shawn Powers
In the July issue of LJ, Zack Brown's “diff
-u” article discusses the efforts to extract part of the
kernel code into libraries. He likens this to the debate about micro versus
monolithic kernels. I thought that the current debate was about making
libraries that could be used elsewhere (and/or substituted) but still
running in kernel space. I understand that Linus Torvalds is enthusiastic
about pushing stuff to user space, but I don't believe that was directly related
to the current discussion of the “library-ification” of kernel
code.
—
Ray Foulkes
Zack Brown replies: Thanks for the letter! You're right that current efforts to librarify the kernel don't make it more of a microkernel and are not part of that debate.
But, I wanted to link the two, because I feel they attempt to achieve similar things. In a microkernel, as you say, the user can swap out chunks of the system that inhabit user space, replacing them with new or experimental implementations.
The library-ification effort is similar, although in the case of libraries, control is exercised at compile time rather than at runtime. The user may replace one subsystem with another in a relatively orderly way, using a known library interface.
It's this orderliness that for me has a similar feel to a microkernel approach. Both cases rely on clear interfaces that provide the ability to replace whole subsystems, without requiring a deep rewrite of adjoining kernel code.
So I don't mean to say that the library-ification effort is part of an actual switch to a microkernel design. But—and now giving vent to pure speculation—I do think that library-ification could someday make it easier for Linux to support “hotswapping” arbitrary subsystems in a running kernel (at which point Linux truly might start to resemble a microkernel in practice).