This Unix and Linux site aims to provide book reviews and free ebooks on Unix and Linux topics: Unix commands, the Unix shell, Unix programming, shell scripting, Unix tutorials, SUSE Linux, Red Hat Linux, Debian Linux, Slackware Linux, Linux servers, Linux commands, Fedora Linux, Linux GUIs, Linux networking, Unix time-sharing concepts, programming Linux games, Samba-3, Motif programming, Unix signal programming, the Linux complete reference, and more.

Red Hat Certified Engineer (RHCE) Version 3.0.0

This study guide will help you prepare for the Linux/Unix exam RH300, Red Hat Certified Engineer. Exam topics include hardware and installation, configuration and administration, kernel services, networking services, the X Window System, security, routers, firewalls, clusters, and troubleshooting. The exam has three components: debugging (2.5 hrs), multiple choice (1 hr), and server installation and network services setup (2.5 hrs).

Slackware Linux Essentials

By David Cantrell, Logan Johnson and Chris Lumens
The Slackware Linux operating system is a powerful platform for Intel-based computers. It is designed to be stable, secure, and functional as both a high-end server and powerful workstation.
This book is designed to get you started with the Slackware Linux operating system. It's not meant to cover every single aspect of the distribution, but rather to show what it's capable of and give you a basic working knowledge of the system.
As you gain experience with Slackware Linux, we hope you find this book to be a handy reference. We also hope you'll lend it to all of your friends when they come asking about “that cool Slackware Linux operating system you're running”.
While this book may not be an edge-of-your-seat novel, we certainly tried to make it as entertaining as possible. With any luck, we'll get a movie deal. Of course, we also hope you are able to learn from it and find it useful.
And now, on with the show.

Linux Configuration and Installation

By Patrick Volkerding, Kevin Reichard and Eric Foster
Welcome to the Linux operating system and the third edition of Linux Installation and Configuration! Whether you are looking for a version of UNIX that you can run on an inexpensive PC or are just totally disgusted with the antics of Microsoft et al. when it comes to operating systems, we think you’ll get a lot out of this book.
In these pages, you'll be guided through a Linux installation and configuration process from beginning to end. You'll also learn about the many unique tools offered by the Linux operating system, as well as how to use these tools in a variety of situations.

Programming Linux Games

Loki Software, Inc.
with John R. Hall
This book is for anyone who wants to learn how to write games for Linux. I assume that you know the basics of working with Linux; if you know enough to start X, open a terminal, copy files around, and fire up a text editor, you're good to go. I also assume that you have a reasonable grasp of the C programming language. Flip through the book and see if you can decipher the syntax of the examples. We'll go through all of the necessary library calls, so don't worry if you see a bunch of unfamiliar function names, but you should be able to understand the majority of the actual code. No prior experience with multimedia programming is assumed, so don't worry if you've never had the perverse pleasure of hacking a graphics register or shoving a pixel into memory.

Slackware Linux Unleashed

By Kamran Husain
docs.rinet.ru
This book is about Linux, a clone of the UNIX operating system that runs on Intel 80x86-based machines, where x is 3 or higher.
You'll find a CD-ROM at the back of the book that contains the Slackware 96 release of the Linux operating system. With this CD-ROM and this book, you should, I hope, be up and running with a UNIX-like operating system in a few hours.
Linux is also very portable and flexible because it has now been ported to DEC Alpha, PowerPC, and even Macintosh machines. Some of these ports are not complete as this book goes to print, but progress is being made daily by Linux enthusiasts all over the world to make this free operating system available to all the popular computing machines in use today. Because the source code for the entire Linux operating system is freely available, developers can spend time actually porting the code instead of wondering about whom to pay hefty licensing fees.
Documentation for the many parts of Linux is not very far away either. The Linux Documentation Project (LDP) is an effort put together by many dedicated and very smart individuals to provide up-to-date, technically valuable information. All of this LDP information can be found on the Internet at various Linux source repositories. Snapshots of the LDP and other Linux documentation files are also provided on the CD-ROM at the back of this book. Each "HOWTO" document for Linux is the result of effort from many Linux enthusiasts. The original authors of these documents are usually also the core Linux developers who have put in hours of time and effort while struggling with new features of Linux.
These individuals are the ones who deserve the credit and glory for the success of Linux as a viable, powerful operating system.

Sams Teach Yourself StarOffice® 5 for Linux™ in 24 Hours

by Sams Publishing
Installing StarOffice
This hour guides you through the process of installing StarOffice on your Linux system. Although installing StarOffice is quite simple, review the information presented here to ensure a smooth installation. The key to a smooth installation, no matter which Linux system you're running, is to be certain that the correct system libraries are installed and available to StarOffice.
Reviewing Linux System Requirements
If you follow the information in this section, you can get StarOffice running on basically any Linux system that meets the listed requirements. The system requirements for installing StarOffice 5 are listed in Table 1.1.
System Requirements to Install StarOffice 5 for Linux
  • Linux kernel version - 2.0.x (or a later stable version)
  • Linux library version - libc6 (also called glibc2), version 2.0.7 (other library versions can be installed on your Linux system, but the correct version must be available to StarOffice, as described in the sections that follow)
  • System memory - 32MB RAM
  • Hard disk space - 11-140MB, depending on installation type
  • X Window System graphics - 256 or more colors or grayscales
Although these requirements are straightforward, note that compared to many Linux programs, StarOffice requires a fair amount of memory and hard disk space. The more you have, the more smoothly StarOffice runs.
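Before running the installer, it is worth confirming the first two requirements from the command line. A minimal sketch, assuming a typical glibc-based Linux system (the exact output format varies by distribution):

```shell
# Report the running kernel version (needs to be 2.0.x or later)
uname -r

# Report the installed glibc (libc6) version (needs to be 2.0.7 or later)
ldd --version | head -n 1
```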

Red Hat Linux Unleashed

docs.rinet.ru
This book is about Linux, a clone of the UNIX operating system that runs on machines with an Intel 80386 processor or better, as well as Intel-compatible CPUs, such as AMD and Cyrix.
This first chapter introduces you to the major features of Linux and helps get you acquainted with them. It does not go into great detail or cover any advanced topics, as this is done in later chapters. Instead, it is intended to give you a head start in understanding what Linux is, what Linux offers you, and what you need to run it.
Don't be afraid to experiment. The system won't bite you. You can't destroy anything by working on it. UNIX has some amount of security built in, to prevent "normal" users (the role you will now assume) from damaging files that are essential to the system. The absolute worst thing that can happen is that you'll delete all of your files and have to go back and reinstall the system. So, at this point, you have nothing to lose.
One word of caution when reading this chapter: At times it will delve into topics that may seem very alien to you, especially if you are new to UNIX and Linux. Don't despair. As you go through this book, you will become more and more familiar with the topics introduced here. Linux is not an easy system to pick up in one day, so don't try to do it. There is no substitute for experience, so relax and learn Linux at your own pace.

Red Hat® Linux 6 Unleashed

Copyright 1999 by Sams
Welcome to Red Hat Linux! This book has brought together a team of authors to help you learn the details about installing, administering, and using the latest version of the best alternative computer operating system for today's PCs. In the back of this book you'll find a CD-ROM that contains Red Hat Linux 6.0, the most recent version, as well as all the software you need to get started. Strictly speaking, Linux is the core of the operating system, the kernel, while the complete operating system and its collection of software are formally known as the distribution. Many of the programs in the Linux distribution come from the Berkeley Software Distribution, or BSD UNIX, and the Free Software Foundation's GNU software suite. Linux melds SysV UNIX and BSD features with POSIX compliance and has inherited many of the best features from more than 25 years of UNIX experience. Linux has also helped provide the recent impetus for the Open Source Software movement.
First released on October 5, 1991, by its author and trademark holder, Linus Torvalds, then at the University of Helsinki (and now at Transmeta in California), Linux has spawned an increasingly vocal legion of advocates, users, and contributors from around the world. Originally written as a hobby, Linux now supports nearly all the features of a modern multitasking, multiuser operating system.
Red Hat, Inc., is a computer software development company that has sold products and provided services related to Linux since 1993, and whose revenues have gone from a little over $400,000 to more than $10 million in the last several years. Red Hat's mission is to "provide professional tools to computing professionals." Red Hat provides these professionals tools by doing the following:
  • Building tools, which Red Hat releases as freely redistributable software available for unrestricted download from thousands of sites on the Internet
  • Publishing books and software applications
  • Manufacturing shrink-wrapped software versions of the Linux OS, making Linux accessible to the broadest possible range of computer users
  • Providing technical support

Red Hat's customer-oriented business focus forces it to recognize that the primary benefits of the Linux OS are not any of the particular advanced and reliable features for which it is famous. The primary benefit is the availability of complete source code and its freely distributable GNU General Public License (also known as the GPL; see the GNU GENERAL PUBLIC LICENSE in the back of this book). This gives any home, corporate, academic, or government user the ability to modify the technology to his or her needs and to contribute to the ongoing development of the technology to the benefit of all users. Working with Linux provides benefits such as security and reliability that commercially restricted, binary-only operating systems simply cannot match. Some of these benefits follow:

There are no royalty or licensing fees. Linus Torvalds has control over the Linux trademark, but the Linux kernel and much of the accompanying software is distributed under the GNU GPL.

Linux runs on nearly any CPU. Linux runs on more CPUs and different platforms than any other computer operating system. One of the reasons for this, besides the programming talents of its rabid followers, is that Linux comes with source code to the kernel and is quite portable. Linux for Intel-based computers (typically known as PCs) can be found on this book's CD-ROM.


XLib Manual

by The Labs.Com
The X Window System is a network-transparent window system that was designed at MIT. X display servers run on computers with either monochrome or color bitmap display hardware. The server distributes user input to and accepts output requests from various client programs located either on the same machine or elsewhere in the network. Xlib is a C subroutine library that application programs (clients) use to interface with the window system by means of a stream connection. Although a client usually runs on the same machine as the X server it is talking to, this need not be the case.
Xlib --- C Language X Interface is a reference guide to the low-level C language interface to the X Window System protocol. It is neither a tutorial nor a user's guide to programming the X Window System. Rather, it provides a detailed description of each function in the library as well as a discussion of the related background information. Xlib --- C Language X Interface assumes a basic understanding of a graphics window system and of the C programming language. Other higher-level abstractions (for example, those provided by the toolkits for X) are built on top of the Xlib library. For further information about these higher-level libraries, see the appropriate toolkit documentation. The X Window System Protocol provides the definitive word on the behavior of X. Although additional information appears here, the protocol document is the ruling document.
To provide an introduction to X programming, this chapter discusses:

Documentation for XFree86[tm] version 4.3.0

The XFree86 Project, Inc
XFree86 is an Open Source version of the X Window System that supports many UNIX(R) and UNIX-like operating systems (such as Linux, FreeBSD, NetBSD, OpenBSD and Solaris x86) on Intel and other platforms. This version is compatible with X11R6.6.
XFree86 4.3.0 is the sixth full release in the XFree86 4.x series.
XFree86 4.x is the current XFree86 release series. The first release in this series was in early 2000. The core of XFree86 4.x is a modular X server. The 4.3.0 version is a new release that includes additional hardware support, functional enhancements and bug fixes. Specific release enhancements can be viewed in the Release Notes.
Most modern PC video hardware is supported in XFree86 4.3.0, and most PC video hardware that isn't supported explicitly can be used with the "vesa" driver. The Driver Status document has a summary of what hardware is supported in 4.3.0 compared with the old 3.3.x (3.3.6) series. It is a good idea to check there before upgrading if you are currently running 3.3.6 with older hardware.
XFree86 is produced by The XFree86 Project, Inc, which is a group of mostly volunteer independent developers. XFree86 is a non-commercial organisation, and would not be viable without the invaluable development contributions of volunteers. This release is dedicated to all who have supported and contributed to XFree86 over the last eleven years.

Overview of Motif 2.0

The Open Group © 1995-2005
opengroup.org
Five years after its appearance on the market, OSF/Motif has become the major Graphical User Interface (GUI) technology for Open Systems, as well as a de jure standard (IEEE P1295). The previous version of OSF/Motif (Release 1.2) introduced major new features such as internationalization, drag-and-drop and tear-off menus. Those features were intended to allow application developers to produce interoperable, easy to use applications for a worldwide market. As a result, this technology has been selected to become the basis of the Common Desktop Environment jointly developed by HP, IBM, Novell and SunSoft, proposed to become an X/Open standard.
Every Motif release contains new features that help the end user community (e.g. drag and drop in 1.2) or the developer community: programming features that are invisible to end users but make developers' lives much easier (e.g. representation types in 1.2). OSF Motif 2.0 is no exception. It includes items for developers such as the extensibility framework and the uniform transfer model, and extensions for end users such as virtual screen support and workspace management. It also contains new widgets that serve both the end user community and the programmers.
For end users, Motif 2.0 presents the following new features reviewed in this paper:
  • virtual screen support
  • workspace management
  • new widgets increasing ease of use and providing more direct manipulation of application objects.

For software developers, Motif 2.0 provides:

  • the extensibility framework. The Motif toolkit is based on the Xt object-oriented framework. As such, it presents the major capabilities of object-oriented systems, such as inheritance. But in truth, a developer needs hard-won knowledge of and experience with Motif to implement a subclass of a Motif widget with the Motif look and feel. It actually requires the developer to have access to the Motif source code itself.


Motif Programming

By A. D. Marshall
This book introduces the fundamentals of Motif programming and addresses wider issues concerning the X Window system. The aim of this book is to provide a practical introduction to writing Motif programs. The key principles of Motif programming are always supported by example programs.
The X Window system is very large and this book does not attempt to detail every aspect of either X or Motif. This book is not intended to be a complete reference on the subject.
The book is organised into logical parts: it begins by introducing the X Window system and Motif, and goes on to study individual components in detail in specific chapters. In the remainder of this chapter we concentrate on why Motif and related areas are important and give a brief history of the development of Motif.

Inside LessTif

By Harald Albrecht
Synthetic resources are a mechanism included in Motif that allows a developer to modify resource values as collected by or assigned to the Xt resource mechanism. That is, if a user wants to find the value of an Xt resource, but M*TIF would rather that the user not see the true value, the synthetic resource mechanism allows the M*TIF developer to "fake out" the Intrinsics and replace the true instance variable values with modified values. Alternatively, the toolkit may prefer to transform a user-specified value into something more palatable to the toolkit.
The more common usage of synthetic resources is to support resolution independence. However, the toolkit developers also realized that the mechanism provides a way to protect "delicate" resources: for example, those that would be dangerous for the user to change, or those that would upset the toolkit if they were unexpectedly modified.

The LessTif Homepage

LessTif is the Hungry Programmers' version of OSF/Motif®. It aims to be source compatible, meaning that the same source code should compile with both and work exactly the same! LessTif is "free software": it is licensed under the GNU Library General Public License (LGPL). You might also want to check out The Open Source Web for a little more information about the Open Source philosophy in general.
The current version of LessTif is 0.95.0 as of June 10, 2006. The code is available for download in various packages.

The gdk-pixbuf Library

By Federico Mena Quintero
The API reference included in this gdk-pixbuf library documentation covers:
  • Initialization and Versions - Library version numbers.
  • The GdkPixbuf Structure - Information that describes an image.
  • Reference Counting and Memory Management - Functions for reference counting and memory management on pixbufs.
  • File Loading - Loading a pixbuf from a file.
  • File saving - Saving a pixbuf to a file.
  • Image Data in Memory - Creating a pixbuf from image data that is already in memory.
  • Inline data - Functions for inlined pixbuf handling.
  • Scaling - Scaling pixbufs and scaling and compositing pixbufs
  • Rendering - Rendering a pixbuf to a GDK drawable.
  • Drawables to Pixbufs - Getting parts of a GDK drawable's image data into a pixbuf.
  • Utilities - Utility and miscellaneous convenience functions.
  • Animations - Animated images.
  • GdkPixbufLoader - Application-driven progressive image loading.
  • Module Interface - Extending gdk-pixbuf
  • gdk-pixbuf Xlib initialization - Initializing the gdk-pixbuf Xlib library.
  • Xlib Rendering - Rendering a pixbuf to an X drawable.
  • X Drawables to Pixbufs - Getting parts of an X drawable's image data into a pixbuf.
  • XlibRGB - Rendering RGB buffers to X drawables.

The tools reference included in this library documentation covers:

  • gdk-pixbuf-csource - C code generation utility for GdkPixbuf images
  • gdk-pixbuf-query-loaders - GdkPixbuf loader registration utility


PHP-GTK 2 Tutorials

© 2001 - 2006 the PHP-GTK Documentation Group
Welcome to the user manual of PHP-GTK 2! This manual should help you get started with PHP-GTK 2 and also provide a comprehensive reference to most aspects of the language.
This manual is split into two main parts. The first part is the Tutorials section. This part will help you get started with PHP-GTK 2 programming and provide some insight into the various aspects of designing applications with PHP-GTK 2. The other part is the Reference section. This part of the manual provides details on all GTK objects and their associated methods and signals. This should be useful whenever you are in doubt of how a particular method or object is used.
Although we have taken great care in ensuring that all of the information in the manual is correct, it is possible that some errors crept in. Please do inform the PHP-GTK documentation group: php-gtk-doc@lists.php.net in case you encounter such errors. If something you want is not present in the manual, do not hesitate to post your question to PHP-GTK-General mailing list: php-gtk-general@lists.php.net.
This manual was produced using a modified version of the Docbook DTD. The modifications were made to document the object system used by PHP-GTK 2 in an easier manner. The XML basis for each class and their methods was initially generated automatically from the PHP-GTK 2 source code, and is updated via PHP5's Reflection to ensure that the documentation stays in-sync with the source.
The XML generator was written by Andrei Zmievski (the original author of PHP-GTK itself) and was modified by Christian Weiske. The documentation is transformed from its XML source into various other formats using XSL stylesheets as well as a host of other tools. The manual build system is maintained by Steph Fox.
We hope you enjoy reading the manual as much as we enjoyed making it!

Gtk2-Perl Tutorial

GTK2-Perl is the collective name for a set of perl bindings for GTK+ 2.x and various related libraries. These modules make it easy to write Gtk and Gnome applications using a natural, perlish, object-oriented syntax.

GTK+ is a GUI toolkit for developing graphical applications that run on POSIX systems such as Linux, as well as on Windows and Mac OS X (provided that an X server for Mac OS X has been installed). It provides a comprehensive set of widgets, and supports Unicode and bidirectional text. It links into the GNOME Accessibility Framework through the ATK library.

Perl is a stable, multi-platform programming language, used throughout the entire Internet and in many mission-critical environments.

GTK2-Perl is part of the official GNOME Platform Bindings

GDK Reference Manual

developer.gnome.org

The API reference included in this GDK manual covers:

  • General - Library initialization and miscellaneous functions
  • Multi-head Support Overview - Overview of GdkDisplay and GdkScreen
  • GdkDisplay - Controls the keyboard/mouse pointer grabs and a set of GdkScreens
  • GdkDisplayManager - Maintains a list of all open GdkDisplays
  • GdkScreen - Object representing a physical screen
  • Points, Rectangles and Regions - Simple graphical data types
  • Graphics Contexts - Objects to encapsulate drawing properties
  • Drawing Primitives - Functions for drawing points, lines, arcs, and text
  • Bitmaps and Pixmaps - Offscreen drawables
  • GdkRGB - Renders RGB, grayscale, or indexed image data to a GdkDrawable
  • Images - A client-side area for bit-mapped graphics
  • Pixbufs - Functions for rendering pixbufs on drawables
  • Colormaps and Colors - Manipulation of colors and colormaps
  • Visuals - Low-level display hardware information
  • Fonts - Loading and manipulating fonts
  • Cursors - Standard and pixmap cursors
  • Windows - Onscreen display areas in the target window system
  • Events - Functions for handling events from the window system
  • Event Structures - Data structures specific to each type of event
  • Key Values - Functions for manipulating keyboard codes
  • Selections - Functions for transferring data via the X selection mechanism
  • Drag and Drop - Functions for controlling drag and drop handling
  • Properties and Atoms - Functions to manipulate properties on windows
  • Threads - Functions for using GDK in multi-threaded programs
  • Input - Callbacks on file descriptors
  • Input Devices - Functions for handling extended input devices
  • Pango Interaction - Using Pango in GDK
  • Cairo Interaction - Functions to support using Cairo
  • X Window System Interaction - X backend-specific functions


Sams UNIX Unleashed, System Administrator's Edition

Macmillan Computer Publishing
The first volume, UNIX Unleashed, Systems Administrator Edition, consists of three major sections or parts. The general focus is getting you started using UNIX, working with the shells, and then administering the system.
Part I, Introduction to UNIX, is designed to get you started using UNIX. It provides you with general information on the organization of the UNIX operating system, how and where to find files, and the commands a general user would want to use. Information is also provided on how to get around the network and communicate with other users on the system.
Part II, UNIX Shells, provides you with information on how to choose which shell to use and how to use that shell. The most popular shells (Bourne, Bourne Again (BASH), Korn, and C) are covered, as well as a comparison between them. Under UNIX, the shell is what provides the user interface to the operating system.
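As a small illustration of working with multiple shells, each shell can be invoked by name to run a single command; this sketch assumes sh and bash are both installed, which is the case on almost every modern system:

```shell
# A shell run with -c reports its own name in $0
sh -c 'echo "running: $0"'     # typically prints: running: sh
bash -c 'echo "running: $0"'   # typically prints: running: bash
```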
Part III, System Administration, gets you started and keeps you going with the tasks required to administer a UNIX system. From installation through performance and tuning, the important topics are covered. The general duties of the system administrator are described (so you can build a job description to give to your boss). In case you are working on a brand-new UNIX system, the basics of UNIX installation are covered. Other topics covered in this section include: starting and stopping UNIX, user administration, file system and disk administration, configuring the kernel (core of the operating system), networking UNIX systems, accounting for system usage, device (add-on hardware) administration, mail administration, news (known as netnews or UseNet) administration, UUCP (UNIX to UNIX Copy Program, an early networking method still in wide use today) administration, FTP (File Transfer Protocol) administration, and finally, backing up and restoring files.

UNIX Systems Administration

Version 2.2
By Wong Kin
The first UNIX was created by Ken Thompson in 1969 at AT&T's Bell Laboratories (Bell Labs). This primitive operating system, first implemented on a DEC PDP-7 machine with a teletype (i.e., the tty) and a then-respectable graphics display, was able to run a simulation game called "Space Travel", also developed by Thompson. This gave people faith that it was a usable system.
UNIX was initially tied to DEC PDP machines until Dennis Ritchie developed the first C compiler. In 1973, the UNIX kernel was re-written in C. This tack, allowing UNIX to be ported from one type of processor to another by simply recompiling its C source code, contributed greatly to its later popularity.
The cryptic name "UNIX" may seem a misnomer to most people. Is it jocular? Brian Kernighan, who coined the name, certainly thought so. Before that, it was called "UNICS", which stood for Uniplexed Information and Computing System, a two-user system.
In its infancy, UNIX was not commercialized by AT&T because of the US anti-trust laws. Despite that, the source code of UNIX (Fifth Edition) was made freely available to some colleges and universities for educational purposes, which galvanized many enhancement projects on UNIX. The system has since prevailed in academic communities and, later, in industry.
In another development, with the help of Ken Thompson et al., two graduate students at the University of California at Berkeley, Bill Joy and Chuck Haley, built a new UNIX distribution by putting together AT&T's Sixth Edition UNIX and an assortment of other software. They called it the Berkeley Software Distribution, better known by its acronym, BSD. In 1979, AT&T released the Seventh Edition of UNIX, which included a K&R C compiler and the Bourne shell (sh).
Meanwhile, some companies were porting UNIX for commercial use. An example was XENIX, jointly developed by the Microsoft Corporation and the Santa Cruz Operation (SCO). By the mid-'80s, with the success of Sun Microsystems' UNIX workstations, companies such as HP, DEC, IBM, and SGI jumped on the UNIX bandwagon one after another, each developing a slightly different variant. The UNIX realm was expanding rapidly, and this growing demand eventually propelled AT&T to produce a commercial version of UNIX as well. The first commercial release, known as System III, was unveiled in 1982. Prior to System III, UNIX was used only internally at Bell Laboratories.
While AT&T started marketing its own UNIX, it allowed other companies to license it and sell it as a product. This amusing dilemma meant that AT&T was competing with its licensees in the same market.
In view of this, the Open Software Foundation (OSF) was formed in the late '80s by a group of UNIX vendors and organizations, including IBM, DEC, and HP. The resulting effort was OSF/1, a non-AT&T-dependent UNIX-like operating system.
In response, AT&T decided to sell its UNIX software company, UNIX System Laboratories, Inc. (USL), to a third party so as to form an independent company. In June 1993, Novell, Inc. (the maker of NetWare and UnixWare) bought USL and the trademark of UNIX.
As of December 1995, Santa Cruz Operation (SCO) acquired the UNIX business from Novell. The SVR4 source code is therefore the property of Santa Cruz Operation (SCO), Inc. and is distributed by SCO, Inc. through licensing. For this reason, publishing the source code or any part of it is illegal.

Unix System Administration By Frank G. Fiamingo

By Frank G. Fiamingo
Systems administration is the installation and maintenance of the UNIX computer system. The system administrator will need to maintain the software and hardware for the system. This includes hardware configuration, software installation, reconfiguration of the kernel, networking, and anything else that's required to make the system work and keep it running in a satisfactory manner. To do this the system administrator can assume superuser, or root, privileges to perform many tasks not normally available to the average user of the system.
Daily Tasks of a System Administrator
1.2.1 - Manage user logins
1.2.2 - Monitor system activity and security
1.2.3 - Administer file systems, devices, and network services
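Each of these daily tasks maps onto a few standard commands. A minimal sketch (the exact tools and their output vary between UNIX flavors):

```shell
# Manage user logins: list who is currently logged in
who

# Monitor system activity: snapshot the first few running processes
ps aux | head -n 5

# Administer file systems: report disk usage on mounted file systems
df -h
```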

VIM Quick Reference Card

By Laurent Grégoire
Everything you need to know to master VIM, a free vi clone running on various platforms. The VIM Quick Reference Card is released under the GNU GPL (General Public Licence).
This card contains the most used commands, sorted by category, on the following topics:
  • Basic and advanced movement;
  • Inserting and replacing text;
  • Ex commands;
  • Copying and transforming;
  • Visual mode, screen commands;
  • Tags, mapping and abbreviations;
  • And much more...!

Handy on your desk as a quick guide, you will always learn one more funky Vim command to improve your hourly C++ coding rate once again, or to impress your notepad-using colleagues! The card is laid out in three columns, and its printout is designed to be folded twice to produce an easy-to-handle quick reference card (hence the name).


Vim Cookbook

by Steve Oualline
This is the Vim cookbook page. It contains short recipes for doing many simple and not so simple things in Vim. You should already know the basics of Vim; however, each command is explained in detail.
Each set of instructions is a complete package. Feel free to pick and choose what you need.
Character twiddling
If you type fast your fingers can easily get ahead of your mind. Frequently people transpose characters. For example the word "the" comes out "teh".
To swap two characters, for example "e" with "h", put the cursor on the "e" and type xp.
The "x" command deletes a character (the "e") and the "p" pastes it after the cursor (which is now placed over the "h".)

The Vim commands cheat sheet - 1.1

By Nana Långstedt
A cheat sheet of some useful and most often used Vim commands. This Vim cheat sheet isn't trying to include all the Vim commands in the known universe, but should list the most essential ones.
:e filename
- Open a new file. You can use the Tab key for automatic file name completion, just like at the shell command prompt.
:w filename
- Save changes to a file. If you don't specify a file name, Vim saves under the name of the file you were editing. To save the file under a different name, specify that name.

The Vi/Ex Editor

By Walter Alan Zintz
To get a real grasp on this editor's power, you need to know the basic ideas embodied in it, and a few fundamental building blocks that are used throughout its many functions.
One cause of editor misuse is that most users, even experienced ones, don't really know what the editor is good at and what it's not capable of. Here's a quick rundown on its capabilities.
First, it's strictly a general-purpose editor. It doesn't format the text; it doesn't have the handholding of a word processor; it doesn't have built-in special facilities for editing binaries, graphics, tables, outlines, or any programming language except Lisp.
It's two editors in one. Visual mode is a better full-screen editor than most, and it runs faster than those rivals that have a larger bag of screen-editing commands. Line editing mode dwarfs the ``global search and replace'' facilities found in word processors and simple screen editors; its only rivals are non-visual editors like Sed where you must know in advance exactly what you want to do. But in the Vi/Ex editor, the two sides are very closely linked, giving the editor a combination punch that no other editor I've tried can rival.
Finally, this editor is at its best when used by people who have taken the trouble to learn it thoroughly. It's too capable to be learned well in an hour or two, and too idiosyncratic to be mastered in a week, and yet the power really is in it, for the few who care to delve into it. A large part of that power requires custom-programming the editor: that's not easy or straightforward, but what can be done by the skillful user goes beyond the direct programmability of any editor except (possibly) Emacs.

Mastering the VI editor

University of Hawaii at Manoa
College of Engineering
Introduction
The VI editor is a screen-based editor used by many Unix users. The VI editor has powerful features to aid programmers, but many beginning users avoid using VI because the different features overwhelm them. This tutorial is written to help beginning users get accustomed to using the VI editor, but also contains sections relevant to regular users of VI as well. Examples are provided, and the best way to learn is to try these examples, and think of your own examples as well... There's no better way than to experience things yourself.
Conventions
In this tutorial, the following convention will be used:
^X denotes a control character. For example, if you see: ^d in the tutorial, that means you hold down the control key and then type the corresponding letter. For this example, you would hold down the control key and then type d.

The Makefile

opussoftware.com
Make reads its instructions from text files. An initialization file is read first, followed by the makefile. The initialization file holds instructions for all “makes” and is used to customize the operation of Make. Make automatically reads the initialization file whenever it starts up. Typically the initialization file is named make.ini and it resides in the directory of make.exe and mkmf.exe. The name and location of the initialization file are discussed in detail later in the manual.
The makefile has instructions for a specific project. The default name of the makefile is literally makefile, but the name can be specified with a command-line option.
With a few exceptions, the initialization file holds the same kind of information as does a makefile. Both the initialization file and the makefile are composed of the following components: comments, dependency lines, directives, macros, response files, rules and shell lines.
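As a sketch of those components (the program and file names here are hypothetical), a minimal makefile might combine comments, macros, a dependency line, and shell lines like this:

```make
# comment: build prog from two object files
CC = cc                      # macro definition
CFLAGS = -O2

prog: main.o util.o          # dependency line: target and prerequisites
	$(CC) $(CFLAGS) -o prog main.o util.o    # shell line
```

Note that each shell line must begin with a tab character, not spaces.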

Introduction to make

nersc.gov
The UNIX make utility facilitates the creation and maintenance of executable programs from source code. This tutorial will introduce the simple usage of the make utility with the goal of building an executable program from a series of source code files.
make keeps track of the commands needed to build the code and, when changes are made to a source file, recompiles only the necessary files. make creates and updates programs with a minimum of effort.
A small initial investment of time is needed to set up make for a given software project, but afterward, recompiling and linking is done consistently and quickly by typing one command: make, instead of issuing many complicated command lines that invoke the compiler and linker.
Most of the varied, subtle, and complex features of make are the subject of entire books and are not covered here. See the NERSC UNIX Resources page for more information.
This tutorial assumes that you have some familiarity with UNIX, text editors and compiling programs from source code.

A GNU Make Tutorial

By Byron Weber Becker
Make is a utility which uses a script, called a makefile, to automatically determine which of a sequence of steps must be repeated because some files have changed. Two of the most common uses are:
  • Recompiling programs residing in several files, and
  • Testing programs.

Since we are using Modula-3, which has its own make-like facility, this document will focus on using make for testing.

There are many versions of make in use. This tutorial assumes GNU Make, distributed freely by the Free Software Foundation. It has a number of features which make it more attractive than the standard Unix make.

Rules

Make uses instructions found in a file named makefile or Makefile to determine what actions to take in order to satisfy some requirement. A simple makefile consists of "rules" or "recipes" that describe how to create a particular target. Each rule has the following shape:

target ... : dependencies ...
	command
	...
	...

A target may be either a file to be generated by make or an identifier for an action to be carried out. Make determines that it needs to build a target if one or more dependencies have changed since the target was last built.
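A concrete instance of that shape (the file names are illustrative): the target report.txt is rebuilt only when its dependency data.txt has changed since the last build.

```make
report.txt: data.txt
	sort data.txt > report.txt
```

The command line under the dependency line must begin with a tab character.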


An Introduction to the UNIX Make Utility

mtsu.edu
This paper is a short introduction to the UNIX make utility. The intended audience is computer science students at Middle Tennessee State University (MTSU) of intermediate ability; if you're taking CSCI 217, this paper will be of use to you. Although make can be used in conjunction with most programming languages, all examples given here use C++, as this is the most common programming language used at MTSU. It is assumed that you have a good understanding of a C++ compiler. As an introduction, this paper intends to teach the reader how to use the most common features of make. A more comprehensive guide may be found by examining the list of references provided.
Layout guide

Throughout the paper various text styles will be used to add meaning and focus on key points. All references to the make utility, file names and any sample output use the fixed font style, i.e. fixed font example. If the example is prefixed with a percent character ( % ) it is a UNIX C-shell command line. Words that are key to make terminology are highlighted in bold when they occur for the first time.
Overview

The make utility is a software engineering tool for managing and maintaining computer programs. Make provides most help when the program consists of many component files. As the number of files in the program increases, so too does the compile time, the complexity of the compilation commands, and the likelihood of human error when entering command lines, i.e. typos and missing file names.
By creating a descriptor file containing dependency rules, macros and suffix rules, you can instruct make to automatically rebuild your program whenever one of the program's component files is modified. Make is smart enough to only recompile the files that were affected by changes thus saving compile time.
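A minimal descriptor file showing all three elements the paragraph mentions, with hypothetical file names, might look like this (the .cpp.o suffix rule tells make how to turn any .cpp file into its .o file):

```make
CXX = g++                    # macro
OBJS = main.o stack.o        # macro listing the object files

.SUFFIXES: .cpp .o
.cpp.o:                      # suffix rule: build any .o from its .cpp
	$(CXX) -c $<

driver: $(OBJS)              # dependency rule
	$(CXX) -o driver $(OBJS)
```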

The GNU Awk User's Guide

By Arnold Robbins
The name awk comes from the initials of its designers: Alfred V. Aho, Peter J. Weinberger and Brian W. Kernighan. The original version of awk was written in 1977 at AT&T Bell Laboratories. In 1985, a new version made the programming language more powerful, introducing user-defined functions, multiple input streams, and computed regular expressions. This new version became widely available with Unix System V Release 3.1 (SVR3.1). The version in SVR4 added some new features and cleaned up the behavior in some of the “dark corners” of the language. The specification for awk in the POSIX Command Language and Utilities standard further clarified the language. Both the gawk designers and the original Bell Laboratories awk designers provided feedback for the POSIX specification.
Paul Rubin wrote the GNU implementation, gawk, in 1986. Jay Fenlason completed it, with advice from Richard Stallman. John Woods contributed parts of the code as well. In 1988 and 1989, David Trueman, with help from me, thoroughly reworked gawk for compatibility with the newer awk. Circa 1995, I became the primary maintainer. Current development focuses on bug fixes, performance improvements, standards compliance, and occasionally, new features.
In May of 1997, Jürgen Kahrs felt the need for network access from awk, and with a little help from me, set about adding features to do this for gawk. At that time, he also wrote the bulk of TCP/IP Internetworking with gawk (a separate document, available as part of the gawk distribution). His code finally became part of the main gawk distribution with gawk version 3.1.

Getting started with awk

HMC Computer Science Department
This qref is written for a semi-knowledgable UNIX user who has just come up against a problem and has been advised to use awk to solve it. Perhaps one of the examples can be quickly modified for immediate use.
awk reads from a file or from its standard input, and outputs to its standard output. You will generally want to redirect that into a file, but that is not done in these examples just because it takes up space. awk does not get along with non-text files, like executables and FrameMaker files. If you need to edit those, use a binary editor like hexl-mode in emacs.
The most frustrating thing about trying to learn awk is getting your program past the shell's parser. The proper way is to use single quotes around the program, like so:
>awk '{print $0}' filename
The single quotes protect almost everything from the shell. In csh or tcsh, you still have to watch out for exclamation marks, but other than that, you're safe.
The second most frustrating thing about trying to learn awk is the lovely error messages:
awk '{print $0,}' filename
awk: syntax error near line 1
awk: illegal statement near line 1
gawk generally has better error messages. At least it tells you where in the line something went wrong:
gawk '{print $0,}' filename
gawk: cmd. line:1: {print $0,}
gawk: cmd. line:1: ^ parse error
So, if you're having problems getting awk syntax correct, switch to gawk for a while.

Effective AWK Programming

A User's Guide for GNU Awk
By Arnold D. Robbins
The name awk comes from the initials of its designers: Alfred V. Aho, Peter J. Weinberger, and Brian W. Kernighan. The original version of awk was written in 1977 at AT&T Bell Laboratories. In 1985 a new version made the programming language more powerful, introducing user-defined functions, multiple input streams, and computed regular expressions. This new version became generally available with Unix System V Release 3.1. The version in System V Release 4 added some new features and also cleaned up the behavior in some of the "dark corners" of the language. The specification for awk in the POSIX Command Language and Utilities standard further clarified the language based on feedback from both the gawk designers, and the original Bell Labs awk designers.
The GNU implementation, gawk, was written in 1986 by Paul Rubin and Jay Fenlason, with advice from Richard Stallman. John Woods contributed parts of the code as well. In 1988 and 1989, David Trueman, with help from Arnold Robbins, thoroughly reworked gawk for compatibility with the newer awk. Current development focuses on bug fixes, performance improvements, standards compliance, and occasionally, new features.

Gawk

By Paul Rubin and Jay Fenlason
Gawk is the GNU Project's implementation of the AWK programming language. It conforms to the definition of the language in the POSIX 1003.2 Command Language And Utilities Standard. This version in turn is based on the description in The AWK Programming Language, by Aho, Kernighan, and Weinberger, with the additional features found in the System V Release 4 version of UNIX awk. Gawk also provides more recent Bell Laboratories awk extensions, and a number of GNU-specific extensions.
Pgawk is the profiling version of gawk. It is identical in every way to gawk, except that programs run more slowly, and it automatically produces an execution profile in the file awkprof.out when done. See the --profile option, below.
The command line consists of options to gawk itself, the AWK program text (if not supplied via the -f or --file options), and values to be made available in the ARGC and ARGV pre-defined AWK variables.
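As a quick illustration of those pre-defined variables, the following one-liner prints what gawk placed in ARGV (the file names are hypothetical; since BEGIN runs before any input is read, they need not exist):

```shell
# Loop over the command-line arguments gawk stored in ARGV.
# Index 0 holds the name of the awk utility itself, so start at 1.
awk 'BEGIN { for (i = 1; i < ARGC; i++) print i, ARGV[i] }' file1 file2
```

This prints `1 file1` and `2 file2`, one per line.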

Awk by example, Part 3

String functions and ... checkbooks
By Daniel Robbins

In this conclusion to the awk series, Daniel introduces you to awk's important string functions, and then shows you how to write a complete checkbook-balancing program from scratch. Along the way, you'll learn how to write your own functions and use awk's multidimensional arrays. By the end of this article, you'll have even more awk experience, allowing you to create more powerful scripts.

Formatting output
While awk's print statement does do the job most of the time, sometimes more is needed. For those times, awk offers two good old friends called printf() and sprintf(). Yes, these functions, like so many other awk parts, are identical to their C counterparts. printf() will print a formatted string to stdout, while sprintf() returns a formatted string that can be assigned to a variable. If you're not familiar with printf() and sprintf(), an introductory C text will quickly get you up to speed on these two essential printing functions. You can view the printf() man page by typing "man 3 printf" on your Linux system.
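A small sketch of the difference between the two (the names and values here are made up): printf() writes its formatted string directly to stdout, while sprintf() hands the string back for assignment to a variable.

```shell
awk 'BEGIN {
    printf("%s owes %.2f\n", "alice", 12.5)   # formatted output to stdout
    msg = sprintf("[%05d]", 42)               # formatted string in a variable
    print msg
}'
```

This prints `alice owes 12.50` followed by `[00042]`.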


Awk by example, Part 2

By Daniel Robbins
In this sequel to his previous intro to awk, Daniel Robbins continues to explore awk, a great language with a strange name. Daniel will show you how to handle multi-line records, use looping constructs, and create and use awk arrays. By the end of this article, you'll be well versed in a wide range of awk features, and you'll be ready to write your own powerful awk scripts.
Multi-line records
Awk is an excellent tool for reading in and processing structured data, such as the system's /etc/passwd file. /etc/passwd is the UNIX user database, and is a colon-delimited text file, containing a lot of important information, including all existing user accounts and user IDs, among other things. In my previous article, I showed you how awk could easily parse this file. All we had to do was to set the FS (field separator) variable to ":".
By setting the FS variable correctly, awk can be configured to parse almost any kind of structured data, as long as there is one record per line. However, just setting FS won't do us any good if we want to parse a record that exists over multiple lines. In these situations, we also need to modify the RS record separator variable. The RS variable tells awk when the current record ends and a new record begins.
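As a small sketch of the idea (the records here are invented), setting RS to the empty string makes awk treat blank lines as record separators, and FS="\n" then makes each line of a record one field:

```shell
# Two multi-line records separated by a blank line; print the first
# two fields (lines) of each record.
printf 'Alice\n555-1234\n\nBob\n555-5678\n' |
awk 'BEGIN { RS = ""; FS = "\n" } { print $1 " -> " $2 }'
```

This prints `Alice -> 555-1234` and `Bob -> 555-5678`.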

Awk by example, Part 1

An intro to the great language with the strange name
By Daniel Robbins
Awk is a very nice language with a very strange name. In this first article of a three-part series, Daniel Robbins will quickly get your awk programming skills up to speed. As the series progresses, more advanced topics will be covered, culminating with an advanced real-world awk application demo.
In this series of articles, I'm going to turn you into a proficient awk coder. I'll admit, awk doesn't have a very pretty or particularly "hip" name, and the GNU version of awk, called gawk, sounds downright weird. Those unfamiliar with the language may hear "awk" and think of a mess of code so backwards and antiquated that it's capable of driving even the most knowledgeable UNIX guru to the brink of insanity (causing him to repeatedly yelp "kill -9!" as he runs for the coffee machine).
Sure, awk doesn't have a great name. But it is a great language. Awk is geared toward text processing and report generation, yet offers many well-designed features that allow for serious programming. And, unlike some languages, awk's syntax is familiar, and borrows some of the best parts of languages like C, Python, and bash (although, technically, awk was created before both Python and bash). Awk is one of those languages that, once learned, will become a key part of your strategic coding arsenal.

An Awk Primer

From vectorsite.net

The Awk text-processing programming language is a useful and simple tool for manipulating text. This document provides a quick tutorial for Awk. The Awk syntax used in this document corresponds to that used on UN*X. It may vary slightly on other platforms.

The Awk text-processing language is useful for such tasks as:

  • Tallying information from text files and creating reports from the results.
  • Adding additional functions to text editors like "vi".
  • Translating files from one format to another.
  • Creating small databases.
  • Performing mathematical operations on files of numeric data.

Awk has two faces: it is a utility for performing simple text-processing tasks, and it is a programming language for performing complex text-processing tasks.

The two faces are really the same, however. Awk uses the same mechanisms for handling any text-processing task, but these mechanisms are flexible enough to allow useful Awk programs to be entered on the command line, or to implement complicated programs containing dozens of lines of Awk statements.

Awk statements comprise a programming language. In fact, Awk is useful for simple, quick-and-dirty computational programming. Anybody who can write a BASIC program can use Awk, although Awk's syntax is different from that of BASIC. Anybody who can write a C program can use Awk with little difficulty, and those who would like to learn C may find Awk a useful stepping stone, with the caution that Awk and C have significant differences beyond their many similarities.

There are, however, things that Awk is not. It is not really well suited for extremely large, complicated tasks. It is also an "interpreted" language -- that is, an Awk program cannot run on its own, it must be executed by the Awk utility itself. That means that it is relatively slow, though it is efficient as interpretive languages go, and that the program can only be used on systems that have Awk. There are translators available that can convert Awk programs into C code for compilation as stand-alone programs, but such translators have to be purchased separately.

One last item before proceeding: What does the name "Awk" mean? Awk actually stands for the names of its authors: "Aho, Weinberger, & Kernighan". Kernighan later noted: "Naming a language after its authors ... shows a certain poverty of imagination." The name is reminiscent of that of an oceanic bird known as an "auk", and so the picture of an auk often shows up on the cover of books on Awk.


UNIX and Linux sed

This tutorial is meant as a brief introductory guide to sed that will help give the beginner a solid foundation regarding how sed works. Note that it omits several commands and will not bring you to sed enlightenment by itself. To reach sed enlightenment, your best bet is to follow the seders mailing list; to subscribe, send email to Al Aab.
Prerequisites
It is assumed that the reader is familiar with regular expressions. If this is not the case, read the grep tutorial which includes information on regular expressions. On this page, we just give a brief revision.
Sed regular expressions
The sed regular expressions are essentially the same as the grep regular expressions. They are summarized below.
^ matches the beginning of the line
$ matches the end of the line
. matches any single character
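A quick sketch of all three at work: the pattern below matches only lines that are exactly "c", any single character, then "t".

```shell
# -n suppresses default output; the p command prints matching lines.
printf 'cat\ncot\ncart\n' | sed -n '/^c.t$/p'
```

Only `cat` and `cot` are printed; `cart` fails because `.` matches exactly one character.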
See also Sed - An Introduction and Tutorial by Bruce Barnett.

Sed - stream editor

The Single UNIX ® Specification, Version 2
Copyright © 1997 The Open Group
NAME
sed - stream editor
SYNOPSIS
sed [-n] script [file...]
sed [-n][-e script]...[-f script_file]...[file...]
DESCRIPTION
The sed utility is a stream editor that reads one or more text files, makes editing changes according to a script of editing commands, and writes the results to standard output. The script is obtained from either the script operand string or a combination of the option-arguments from the -e script and -f script_file options.
OPTIONS
The sed utility supports the XBD specification, Utility Syntax Guidelines , except that the order of presentation of the -e and -f options is significant.
The following options are supported:
-e script
Add the editing commands specified by the script option-argument to the end of the script of editing commands. The script option-argument has the same properties as the script operand, described in the OPERANDS section.
-f script_file
Add the editing commands in the file script_file to the end of the script.
-n
Suppress the default output (in which each line, after it is examined for editing, is written to standard output). Only lines explicitly selected for output will be written.
Multiple -e and -f options may be specified. All commands are added to the script in the order specified, regardless of their origin.
OPERANDS
The following operands are supported:
file
A pathname of a file whose contents will be read and edited. If multiple file operands are specified, the named files will be read in the order specified and the concatenation will be edited. If no file operands are specified, the standard input will be used.
script
A string to be used as the script of editing commands. The application must not present a script that violates the restrictions of a text file except that the final character need not be a newline character.
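A short sketch of the -n and -e options described above: with the default output suppressed, the p command selects lines explicitly, and the two -e scripts are added to the overall script in the order given.

```shell
# Print only the first and third input lines.
printf 'one\ntwo\nthree\n' | sed -n -e '1p' -e '3p'
```

This prints `one` and `three`.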

Sed by example, Part 3

Get to know the powerful UNIX editor
By Daniel Robbins
In this conclusion of the sed series, Daniel Robbins gives you a true taste of the power of sed. After introducing a handful of essential sed scripts, he'll demonstrate some radical sed scripting by converting a Quicken .QIF file into a text-readable format. This conversion script is not only functional, it also serves as an excellent example of sed scripting power.
Muscular sed
In my second sed article, I offered examples that demonstrated how sed works, but very few of these examples actually did anything particularly useful. In this final sed article, it's time to change that pattern and put sed to good use. I'll show you several excellent examples that not only demonstrate the power of sed, but also do some really neat (and handy) things. For example, in the second half of the article, I'll show you how I designed a sed script that converts a .QIF file from Intuit's Quicken financial program into a nicely formatted text file. Before doing that, we'll take a look at some less complicated yet useful sed scripts.
Text translation
Our first practical script converts UNIX-style text to DOS/Windows format. As you probably know, DOS/Windows-based text files have a CR (carriage return) and LF (line feed) at the end of each line, while UNIX text has only a line feed. There may be times when you need to move some UNIX text to a Windows system, and this script will perform the necessary format conversion for you.
$ sed -e 's/$/\r/' myunix.txt > mydos.txt
In this script, the '$' regular expression will match the end of the line, and the '\r' tells sed to insert a carriage return right before it. Insert a carriage return before a line feed, and presto, a CR/LF ends each line. Please note that the '\r' will be replaced with a CR only when using GNU sed 3.02.80 or later. If you haven't installed GNU sed 3.02.80 yet, see my first sed article for instructions on how to do this.

Sed by example, Part 2

Get to know the powerful UNIX editor
By Daniel Robbins
Sed is a very powerful and compact text stream editor. In this article, the second in the series, Daniel shows you how to use sed to perform string substitution; create larger sed scripts; and use sed's append, insert, and change line commands.
Sed is a very useful (but often forgotten) UNIX stream editor. It's ideal for batch-editing files or for creating shell scripts to modify existing files in powerful ways. This article builds on my previous article introducing sed.
Substitution!
Let's look at one of sed's most useful commands, the substitution command. Using it, we can replace a particular string or matched regular expression with another string. Here's an example of the most basic use of this command:
$ sed -e 's/foo/bar/' myfile.txt
The above command will output the contents of myfile.txt to stdout, with the first occurrence of 'foo' (if any) on each line replaced with the string 'bar'. Please note that I said first occurrence on each line, though this is normally not what you want. Normally, when I do a string replacement, I want to perform it globally. That is, I want to replace all occurrences on every line, as follows:
$ sed -e 's/foo/bar/g' myfile.txt
The additional 'g' option after the last slash tells sed to perform a global replace.
Here are a few other things you should know about the 's///' substitution command. First, it is a command, and a command only; there are no addresses specified in any of the above examples. This means that the 's///' command can also be used with addresses to control what lines it will be applied to, as follows:
$ sed -e '1,10s/enchantment/entrapment/g' myfile2.txt
The above example will cause all occurrences of the phrase 'enchantment' to be replaced with the phrase 'entrapment', but only on lines one through ten, inclusive.

Sed by example, Part 1

Get to know the powerful UNIX editor
By Daniel Robbins
In this series of articles, Daniel Robbins will show you how to use the very powerful (but often forgotten) UNIX stream editor, sed. Sed is an ideal tool for batch-editing files or for creating shell scripts to modify existing files in powerful ways.
In the UNIX world, we have a lot of options when it comes to editing files. Think of it -- vi, emacs, and jed come to mind, as well as many others. We all have our favorite editor (along with our favorite keybindings) that we have come to know and love. With our trusty editor, we are ready to tackle any number of UNIX-related administration or programming tasks with ease.
While interactive editors are great, they do have limitations. Though their interactive nature can be a strength, it can also be a weakness. Consider a situation where you need to perform similar types of changes on a group of files. You could instinctively fire up your favorite editor and perform a bunch of mundane, repetitive, and time-consuming edits by hand. But there's a better way.
Enter sed
It would be nice if we could automate the process of making edits to files, so that we could "batch" edit files, or even write scripts with the ability to perform sophisticated changes to existing files. Fortunately for us, for these types of situations, there is a better way -- and the better way is called "sed".
sed is a lightweight stream editor that's included with nearly all UNIX flavors, including Linux. sed has a lot of nice features. First of all, it's very lightweight, typically many times smaller than your favorite scripting language. Secondly, because sed is a stream editor, it can perform edits to data it receives from stdin, such as from a pipeline. So, you don't need to have the data to be edited stored in a file on disk. Because data can just as easily be piped to sed, it's very easy to use sed as part of a long, complex pipeline in a powerful shell script. Try doing that with your favorite editor.
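As a small sketch of that point (the log lines here are invented), sed edits whatever arrives on its standard input, so it slots directly into a pipeline with no intermediate file:

```shell
# Rewrite a log prefix on the fly as the data streams through.
printf 'warn: low disk\ninfo: ok\n' | sed -e 's/^warn:/WARNING:/'
```

This prints `WARNING: low disk` followed by `info: ok`.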

UNIX Network Programming with TCP/IP

Client-Server and Internet Applications
By Alan Dix, Lancaster University
Three interrelated aspects:
  • TCP/IP protocol suite
  • standard Internet applications
  • coding using UNIX sockets API

Development of Internet & TCP/IP

  • 1968 First proposal for ARPANET (military & gov’t research), contracted to Bolt, Beranek & Newman
  • 1971 ARPANET enters regular use
  • 1973/4 redesign of lower level protocols leads to TCP/IP
  • 1983 Berkeley TCP/IP implementation for 4.2BSD public domain code
  • 1980s rapid growth of NSFNET – broad academic use
  • 1990s WWW and public access to the Internet


The UNIX/Linux Operating System Networking/Internet

by Claude Cantin
Connecting to UNIX/Linux from MS Windows-based Systems
Although these course notes are for UNIX/Linux, many people use PCs running a Microsoft-based operating system such as Windows 95/98/2000/NT to access their UNIX/Linux servers. Traditionally they have relied on "stock" programs like telnet and ftp to access their systems. They have also used tools like Eudora or Outlook to read their UNIX mail.
Since the spring of 2001, all communication done with UNIX/Linux must be done through a secure channel. Between UNIX/Linux systems, that secure channel is created when using ssh and scp.
The Research Computing Support Group (RCSG) has put together a series of tools people can install on their PCs, to access the UNIX/Linux systems both within NRC, and from the NRC dial-up access.
The tools covered include putty and WinSCP:
  • putty is an SSH-based telnet-like client. It allows for secure communication between Windows and UNIX/Linux, much the same way ssh does on the UNIX/Linux platforms.
  • It has a wide range of configuration options for fonts, colours, and behaviour. If you run X on your PC, putty allows the tunnelling of X applications (the option must be enabled within putty).
  • Its basic installation requirement is the download of one executable .exe file, but the full package includes command-line and batch-capable utilities.
  • WinSCP is the Windows counterpart of the UNIX/Linux scp. Graphical-based, it allows for the safe, encrypted transfer of files between Windows and UNIX/Linux platforms.

More details about those tools, as well as downloadable modules, may be found at http://www.nrc.ca/imsb/rcsg/ras/ssh-clients.html

That web page also explains how GUI-based PC FTP tools, and mail tools such as Eudora and Outlook, may be used safely through the secure channel created by SSH.

Sams UNIX Unleashed, Internet Edition

Robin Burk and David B. Horvath, CCP, et al
© Copyright, Macmillan Computer Publishing. All rights reserved.
Our highly popular first edition brought comprehensive, up-to-date information on UNIX to a wide audience. That original edition was already 1,600 pages. The new topics covered in this edition have obliged us to split the second edition into two volumes, namely, the System Administrator's Edition and the Internet Edition, which we'll refer to jointly as "the new" or the second edition. Though each volume can stand alone and may be read independently of the other, they form a complementary set with frequent cross-references. This new edition is written for:
  • People new to UNIX
  • Anyone using UNIX who wants to learn more about the system and its utilities
  • Programmers looking for a tutorial and reference guide to C, C++, Perl, awk, and the UNIX shells
  • System administrators concerned about security and performance on their machines
  • Webmasters and Internet server administrators
  • Programmers who want to write Web pages and implement gateways to server databases
  • Anyone who wants to bring his or her UNIX skills and knowledge base up-to-date

A lot has happened in the UNIX world since the first edition of UNIX Unleashed was released in 1994. Perhaps the most important change is the tremendous growth of the Internet and the World Wide Web. Much of the public Internet depends on UNIX-based servers. In addition, many corporations of all sizes have turned to UNIX as the environment for network and data servers. As UNIX fans have long known, the original open operating system is ideal for connecting heterogeneous computers and networks into a seamless whole.

UNIX Systems Programming I & II

by Alan Dix
UNIX Systems Programming I
Content: File I/O, filters and file manipulation. Command line arguments and environment variables. Terminal handling and text based screen applications. Interrupt handling. Finding the time. Mixing C and scripts.
Objective: The attendee should leave the course able to produce programs similar to standard UNIX utilities (mv, rm etc.) using raw UNIX system calls and do basic screen manipulation (for text based editors, menu driven systems, forms etc.).
Prerequisites: Reasonable standard of C programming (should understand pointers, structures, functions).
UNIX Systems Programming II
Content: Advanced file I/O including special devices. Process handling (fork, exec etc.). Inter-process communication via pipes, pseudo-terminals and sockets. Blocking & non-blocking I/O, handling multiple I/O streams using select. Other miscellaneous system calls including timers. Locking and caching issues.
Objective:
The attendee should leave the course able to produce programs which generate, link and control multiple processes, a prerequisite for more advanced client-server and network-based applications.
Prerequisites: Reasonable standard of C programming plus an understanding of basic UNIX file I/O (as above, but excluding TTY handling).
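The inter-process machinery the second course dissects (fork, exec, pipes) is the same machinery the shell itself uses for every pipeline; a one-line illustration:

```shell
# For "sort | head" the shell pipe()s, fork()s two children,
# and exec()s each command; the kernel streams data between them.
printf 'c\na\nb\n' | sort | head -n 1    # prints "a"
```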

UNIX Help : Commands and Tips

Eggdrop shell support

satexas.com

What are the basic commands in Unix to move around my account?

  • pwd [Tells you your current directory (in full)]
  • cd [Takes you to your HOME (starting directory)]
  • cd .. [Moves you up one directory]
  • cd /dir/dir [Moves you to a particular directory]
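A short session (using a scratch directory under /tmp, so nothing important is touched) ties the four commands together:

```shell
mkdir -p /tmp/navdemo/docs   # scratch tree to practise in
cd /tmp/navdemo/docs
pwd                          # prints /tmp/navdemo/docs
cd ..                        # up one directory
pwd                          # prints /tmp/navdemo
cd /tmp/navdemo/docs         # jump straight to a particular directory
cd                           # no argument: back to your HOME
pwd                          # prints your home directory
```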

What are the basic Unix commands to copy & move files?

  • cp file file2 [Copies the file to file2]
  • mv file newfile [Moves, or renames, the file]
  • rm file [Removes the file permanently]
  • rm -f file [Forces removal of the file, without prompting]
  • rm -rf dirname [Removes a directory and all its subdirectories]
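The same commands in action, inside a scratch directory so nothing of value can be lost:

```shell
mkdir -p /tmp/cpdemo && cd /tmp/cpdemo
echo "hello" > file          # create a small test file
cp file file2                # copy: file and file2 both exist now
mv file2 renamed             # rename (a move within the same directory)
rm file                      # gone for good: UNIX has no trash bin
ls                           # only "renamed" is left
cd / && rm -rf /tmp/cpdemo   # remove the whole scratch tree, no prompting
```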

Unix for Advanced Users

Unix Workstation Support Group
Indiana University
http://www.uwsg.iu.edu/
These are notes to be used in conjunction with the "Unix for Advanced Users" course offered by the Unix Workstation Support Group at Indiana University. If you are interested in taking this class, please visit our registration page.
Introduction
Unix today is a mature operating system, and is used heavily in a large variety of scientific, engineering, and mission critical applications. Interest in Unix has grown substantially in recent years because of the proliferation of the Linux (a Unix look-alike) operating system. This section briefly describes the history of Unix, why it is considered a good operating environment, and the marriage between Unix and C.
History of Unix
The Unix operating system found its beginnings in MULTICS (Multiplexed Information and Computing Service). The MULTICS project began in the mid 1960s as a joint effort by General Electric, the Massachusetts Institute of Technology and Bell Laboratories. In 1969 Bell Laboratories pulled out of the project.
One of the Bell Laboratories people involved in the project was Ken Thompson. He liked the potential MULTICS had, but felt it was too complex and that the same thing could be done in a simpler way. In 1969 he wrote the first version of Unix, called UNICS. UNICS stood for Uniplexed Information and Computing Service. Although the operating system has changed, the name stuck and was eventually shortened to Unix.
Ken Thompson teamed up with Dennis Ritchie, who wrote the first C compiler. In 1973 they rewrote the Unix kernel in C. The following year a version of Unix known as the Fifth Edition was first licensed to universities. The Seventh Edition, released in 1979, served as a dividing point for two divergent lines of Unix development. These two branches came to be known as SVR4 (System V) and BSD.
Ken Thompson spent a year's sabbatical at the University of California at Berkeley. While there he and two graduate students, Bill Joy and Chuck Haley, wrote the first Berkeley version of Unix, which was distributed to students. This resulted in the source code being worked on and developed by many different people. The Berkeley version of Unix is known as BSD, the Berkeley Software Distribution. From BSD came the vi editor, the C shell, virtual memory, Sendmail, and support for TCP/IP.
For several years SVR4 was the more conservative, commercial, and well-supported branch. Today SVR4 and BSD look very much alike. Probably the biggest cosmetic difference between them is the way the ps command functions.
The Linux operating system was developed as a Unix look alike and borrows from both BSD and SVR4.

Unix FAQ/faq

faqs.org
Subject: When someone refers to 'rn(1)' or 'ctime(3)', what does the number in parentheses mean?
Date: Tue, 13 Dec 1994 16:37:26 -0500
1.2) When someone refers to 'rn(1)' or 'ctime(3)', what does the number in parentheses mean?
It looks like some sort of function call, but it isn't. These numbers refer to the section of the "Unix manual" where the appropriate documentation can be found. You could type "man 3 ctime" to look up the manual page for "ctime" in section 3 of the manual.
The traditional manual sections are:
  1. User-level commands
  2. System calls
  3. Library functions
  4. Devices and device drivers
  5. File formats
  6. Games
  7. Various miscellaneous stuff - macro packages etc.
  8. System maintenance and operation commands

Some Unix versions use non-numeric section names. For instance, Xenix uses "C" for commands and "S" for functions. Some newer versions of Unix require "man -s# title" instead of "man # title". Each section has an introduction, which you can read with "man # intro" where # is the section number. Sometimes the number is necessary to differentiate between a command and a library routine or system call of the same name. For instance, your system may have "time(1)", a manual page about the 'time' command for timing programs, and also "time(3)", a manual page about the 'time' subroutine for determining the current time.

You can use "man 1 time" or "man 3 time" to specify which "time" man page you're interested in. You'll often find other sections for local programs or even subsections of the sections above - Ultrix has sections 3m, 3n, 3x and 3yp among others.
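Put into practice, the section-number convention looks like this at the prompt (output omitted; exact section contents vary between Unix versions):

```
$ man 1 time     # the command for timing programs
$ man 3 time     # the library routine for the current time
$ man 2 intro    # introduction to the system-call section
$ man -s3 ctime  # same as "man 3 ctime" on some newer systems
```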

Unix Tutorials

Little Unix Programmers Group (LUPG)
The following set of tutorials reflects an effort to give Unix programmers, and would-be programmers, a chance to get familiar with various aspects of programming on Unix-like systems, without the need to buy an expensive set of books or to spend a lot of time wading through technical material. The one assumption common to all tutorials (unless stated otherwise) is that you already know C programming on some system.
The general intention is to allow someone to get familiar with a subject rather quickly, so they can start experimenting with it, and then read a more thorough user manual or reference manual once they have gotten over the initial "fear". By no means will these tutorials suffice to turn anyone into a proficient professional, but one needs to start somewhere; and then again, why not do it for free?
Tutorials Index (note: each tutorial may be browsed online, or downloaded as a .tar.gz archive). The size of each tutorial is given as the number of screen-pages when viewed using the lynx text-based web browser (assuming 25 lines per page):
Unix Beginners
Compiling C/C++ Programs On Unix (archive) (~15 lynx pages)
Debugging With "gdb" (archive) (~11 lynx pages)
Automating Program Compilation Using Makefiles (archive) (~13 lynx pages)
Manipulating Files And Directories In Unix (archive) (~50 lynx pages)
Intermediate Level
Creating And Using C Libraries (archive) (~18 lynx pages)
Unix Signals Programming (archive) (~29 lynx pages)
Internetworking With Unix Sockets (archive) (~21 + ~44 lynx pages)
Accessing User Information On A Unix System (archive) (~38 lynx pages)
Graphics Programming
Basic Graphics Programming With The Xlib Library (archive) (~59 + ~44 lynx pages)
Advanced Topics
Unix And C/C++ Runtime Memory Management For Programmers (archive) (~69 lynx pages)
Parallel Programming - Basic Theory For The Unwary (archive) (~29 lynx pages)
Multi-Threaded Programming With The Pthreads Library (archive) (~60 lynx pages)
Multi-Process Programming Under Unix (archive) (~80 lynx pages)

Basic Introduction to UNIX/linux

By Claude Cantin
This course is intended for people not familiar with the UNIX/linux operating system, but familiar with other computer systems such as MS Windows, DOS or VMS. It is meant as an introduction for beginners to help them understand concepts behind the UNIX/linux operating system. Intermediate users may find the course useful as a refresher.
Up to 2003, most of the command examples used throughout the text were performed on a Silicon Graphics O2 running IRIX 6.5. Since September 2003, however, the hands-on portion of the course has been done using linux (SuSE 8.2, then 9.0), which means most commands are now performed under linux. SGI systems running IRIX, Sun systems running Solaris, Hewlett-Packard systems running HP-UX, IBM RS/6000s running AIX and most PCs (and other architectures) running linux use most of the commands described in this manual. They share the same basic commands, although some of the options may vary slightly between the different architectures.
In specific cases, the book uses commands based on linux. The distribution used was SuSE version 7.3 and newer.
Notes:
  • This book refers to various UNIX derivatives running on "workstations".
  • The author's definition of "workstation" includes systems such as the Sun Microsystems SPARCstation family; the Silicon Graphics Personal IRIS, Indigo, Indigo2, Power Series, Challenge, Power Challenge, Onyx, Power Onyx, Indy, O2, Octane, Origin and Altix families; the IBM RS/6000 series; the HP 9000 model 700 and 800 families; the Compaq AXP families (systems running Tru64 UNIX); and 500+ MHz PCs running one of the linux distributions.
  • Although most sections refer to UNIX in general, some refer to a specific architecture. Others may refer to NRC-specific topics. Those sections are generally clearly indicated.
  • This book also refers to various linux distributions, notably SuSE 7.x and 8.x, and Red Hat 7.x and 8.x.
