Washington Area Informix Users Group


January 1998 Newsletter

Volume 8, No. 1


Highlights of This Issue

The Washington Area Informix User Group Email List

Forum 98 - A One Day Technical Conference for Informix Users

PeopleSoft / Informix Special Interest Group

Exploring the OnLine Command Line Utilities - ONSTAT, by Lester Knutsen

Background Processing and Shell Scripting for Informix, by Ron M. Flannery


Forum 98 - Solutions for Informix Users

The Washington Area Informix Users Group invites you to attend our fourth one-day Technical Forum. This will be an exciting event that includes technical presentations, practical training sessions, exhibits, demos, a public domain software diskette, and a chance to meet and network with other Informix database developers, programmers, DBAs, and users. See the full Forum article later in this newsletter for details.

Location: Fairview Park Marriott, 3111 Fairview Park Drive, Falls Church, VA
Date: Friday - February 20, 1998 - 8:00 am to 5:00 pm
Register early -- space is limited! (See form on last page)


Informix DBA Certification Study Group

An Informix DBA Certification Study Group is getting started. The purpose of the Informix Certification Study Group is to learn about INFORMIX-OnLine and prepare for becoming an Informix Certified Professional. This is for members interested in taking the Informix certification exam. Details on the first meeting are as follows:

Location: Advanced DataTools Corporation, 4216 Evergreen Lane, Suite 136, Annandale, VA
Date: Saturday, January 10, 1998 from 10:00 am to approximately 12 noon or 1 pm
Please RSVP to 703-256-0267 ext. 4
Two additional meetings are tentatively planned for Saturday, January 17 and Saturday, January 24, based on participation. The location for the additional meetings will be decided at the first meeting. Peter Schmidt of Advanced DataTools has volunteered to coordinate the study group. After the third meeting we hope to schedule a time when everyone in the study group can take the test. Tests are conducted by Sylvan Prometric on behalf of Informix, and there is a fee of $150.00 for each test. We will review the details at the first meeting. If you have questions, please contact Peter Schmidt via email at pschmidt@advancedatatools.com.


The Washington Area Informix User Group Email List

The waiug-members email list is for announcements and discussions by members of the Washington (DC, USA) Area Informix User Group. The main goal of this email list is to share information among Informix users in the Maryland, Virginia, and Washington DC areas. This is a discussion place for user group members. However, you don't have to be a member of the user group to subscribe to this mailing list. The email list is sponsored by the International Informix User Group using the Majordomo mailing list software on the IIUG computer system. Seth Grimes has volunteered to manage the list.

Listed below are some key facts about the list. When the instructions say to use a particular Majordomo command, this means you should send the specified text in the body of an email message to "majordomo@iiug.org". Do not put commands in the Subject header line, as this line is ignored.

SUBSCRIBING -- The list is open to anyone who wishes to subscribe. To subscribe, use the Majordomo command "subscribe waiug-members" in the body of your email message.

If your address on the list needs to be something other than the one that appears on the "From:" line of your outgoing email, use the Majordomo command "subscribe waiug-members your_address" where "your_address" is replaced by the address to which your list messages should be sent. Note that this request will be routed to the list manager for approval, so a subscription of this type may not take effect immediately.
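
For example, on most Unix systems you could send the subscribe command from the shell with the standard "mail" utility. This is simply one way to compose the message, shown here as a sketch; any mail program that lets you type the message body will do:

    echo "subscribe waiug-members" | mail majordomo@iiug.org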

POSTING MESSAGES -- You must be a subscriber to post to this list. Majordomo enforces this restriction by checking to see that the From: address of the posted message matches one of the subscriber addresses. This restriction may cause problems at a site that subscribes using an email alias that explodes list messages to several local users. Individual users at such a site will not be able to post to the list unless they can make their messages appear to come from the local alias.

Newsletter Sponsorship

The user group has been supported by many companies over the years. If your company would like to sponsor the newsletter, please call for more information. We would like to thank the following companies for sponsoring this issue:

Advanced DataTools Corporation


This newsletter is published by the Washington Area Informix User Group.
4216 Evergreen Lane, Suite 136, Annandale, VA 22003 Phone:703-256-0267
  • President/Editor: Lester Knutsen 703-256-0267
  • Membership: John Petruzzi 703-490-4598
  • Treasurer/Secretary: Sam Hazelett 703-277-6882
  • Programs/Sponsors: Nick Nobbe 202-707-0548
For more Information: 703-256-0267
Web Page: http://www.iiug.org/~waiug


Forum 98 - Solutions for Informix Users
The Washington Area Informix Users Group invites you to attend our fourth one-day Technical Forum. This will be an exciting event that includes technical presentations, practical training sessions, exhibits, demos, a public domain software diskette, and a chance to meet and network with other Informix database developers, programmers, DBAs, and users.
Location: Fairview Park Marriott, 3111 Fairview Park Drive, Falls Church, VA
Date: Friday - February 20, 1998 - 8:00 am to 5:00 pm
At the last Forum we had over 230 participants, 16 speakers and 14 exhibitors. Participants said they learned more practical information in the Forum than in any other event. Invitations for this Forum are being sent to over 3,000 Informix users. We are planning the following sessions and exhibits:
Keynote: Dr. Michael R. Stonebraker

Dr. Michael R. Stonebraker is Chief Technology Officer of Informix Software and was co-founder and chief technology officer at Illustra. A noted expert in database management systems, operating systems, and expert systems, Dr. Stonebraker is Professor Emeritus of Computer Science at the University of California at Berkeley. Illustra represented the commercialization of Dr. Stonebraker's POSTGRES research project on the UC Berkeley campus. Dr. Stonebraker founded Ingres Corporation in 1980. Dr. Stonebraker recently authored the book entitled "Object-Relational DBMSs: The Next Great Wave."

Technical Sessions

  • Approaches to GUI Using Informix 4GL
  • Architectures of the Future
  • Using Case Tools with INFORMIX-Universal Server
  • Data Warehouse & Informix Case Study
  • INFORMIX-OnLine Performance
  • Informix SQL Optimization
  • Using Java with Informix
  • New Features in INFORMIX-OnLine 7.3
  • Using INFORMIX-OnLine on Windows NT
  • Migrating to Informix DSA 7.X
  • Using IUS DataBlade Object Oriented Features in Traditional Database Applications
  • Web Database Development
Current Exhibitors
  • Advanced DataTools Corporation
  • CrossZ Software
  • SCH Technologies
  • BMC Software
  • FuGEN Technologies
  • Silverrun Technologies, Inc.
  • Brio Technology
  • Informix Software, Inc.
  • Space Works, Inc.
  • Business Objects
  • Intraware, Inc.
  • Summit Data Group
  • Cognos Corporation
  • IQ Software
  • Symantec Internet Tools
  • Compuware Corporation
  • Logic Works, Inc.
  • Technology Investments, Inc.
Lunch sponsored by Informix

Registration is open to everyone for $30. Registration includes the Forum, a diskette with user group public domain software, and lunch. A final schedule and reminder will be faxed or mailed to all registrants. Please contact John Petruzzi, Membership Director, to register, at 703-490-4598, or send in the form on the back of this newsletter.

See our web page for the latest details: http://www.iiug.org/~waiug/
Register early as space is limited!

PeopleSoft / Informix SIG

The second PeopleSoft/Informix SIG (a.k.a. PeopleMix) meeting was held on November 18, 1997, at the Choice Hotels headquarters in Silver Spring, MD. It was exciting to see that the number of attendees almost doubled from the first SIG meeting in June. The SIG President, Sam Kneppar, opened the meeting by asking the 38 attendees (30 customers and 8 vendors) to introduce themselves. Some attendees came from as far away as Richmond, VA; Ithaca, NY; and Austin, TX.

The first presenter, Joey Annan from PeopleSoft Technology Services, presented 'Planning for a Three-Tier Architecture with PeopleSoft Release 7 and Beyond.' His presentation included: the underlying architecture, platform support, PeopleSoft client and web deployment, Tuxedo as middleware, scalability features, memory-based caching (versus file-based caching), and the authentication service.

The next presenter, Kevin Fennimore from UCI Consulting, presented 'Informix - Online Dynamic Server Plans.' His presentation included: 'RAS' (Reliability, Availability, Serviceability), new features for Informix 7.3, nVision patches for PeopleSoft, new optimization techniques, and the tremendous impact on run times for nVision reports (e.g., reducing run times from 4 hours to 5 minutes!).

The final presenters, Janice Callahan and Michael Garner from Optimal Networks, presented 'Optimal Networks - Application Expert Performance Analysis Tool.' Their presentation highlighted their products Optimal Performance and Application Expert. Janice and Michael discussed how the Optimal Networks approach toward client server WAN application performance removes the finger pointing and departmental politics from multi-technology application performance.

The PeopleMix SIG plans to meet quarterly and new members are always welcome. This SIG is instrumental in establishing a network of contacts thereby allowing members to become familiar with the benefits and challenges of implementing and maintaining Informix and PeopleSoft. The SIG provides meaningful presentations and promotes group affiliation.

For additional information, feel free to contact either Nadia Skiscim-Informix 703-847-3323 (nadias@informix.com) or Sam Kneppar-SIG President 202-283-5650 (Sam.Kneppar@ccmail.irs.gov).


Exploring the Informix OnLine Utility - ONSTAT
by Lester Knutsen

INFORMIX-OnLine DSA comes with a set of powerful command line utilities that enable you to monitor, tune, and configure your database server. This article will focus on one of these utilities, ONSTAT, and present ways to use it to optimize your performance as a database administrator. Future articles will explore the other utilities.

ONSTAT - shows OnLine server statistics

ONSTAT is the command line utility that gets the most usage. It reads OnLine's shared memory structures and provides a wealth of useful information about the state of your server. It does not place any locks on shared memory structures and incurs very little overhead, so you can use it at any time. The information is current at the time the command is issued, and the data can change as you are using the command.

There are more options for ONSTAT than for any other Informix utility. Many of the options are debugging parameters that are not well documented and don't make sense to the average DBA. There are also many very useful options that help you manage your OnLine server. We will focus on the more useful options in this article. For a complete list of the syntax and all the options, see Figure 1. In INFORMIX-OnLine DSA the ONSTAT command has been greatly enhanced with a new set of monitoring and debugging options. These options all start with "-g" and are listed in Figure 2.

Figure 1: ONSTAT syntax and options 


onstat [-abcdfghklmpstuxzBCDFRX][-I] [-r seconds] [-o file] [infile]
    -a Print all information
    
    -b Print buffers
    
    -c Print configuration file
    
    -d Print DBspaces and chunks
    
    -f Print dataskip status
    
    -g New Monitoring subcommands (default: all). See Figure 2 for all options
    
    -i Interactive mode
    
    -k Print locks
    
    -l Print logging
    
    -m Print message log
    
    -p Print profile
    
    -s Print latches
    
    -t Print TBLspaces
    
    -u Print user threads
    
    -x Print transactions
    
    -z Zero profile counts
    
    -B Print all buffers
    
    -C Print btree cleaner requests
    
    -D Print DBspaces and detailed chunk stats
    
    -F Print page flushers
    
    -R Print LRU queues
    
    -X Print entire list of sharers and waiters for buffers
    
    -r Repeat options every n seconds (default: 5)
    
    -o Put shared memory into specified file (default: onstat.out)
    
    infile Use infile to obtain shared memory information
    
    -  Displays OnLine mode
Figure 2: ONSTAT new options (-g) syntax 
onstat -g [options from list below]
    all Print all MT information
    
    ath Print all threads 
    
    wai Print waiting threads
    
    act Print active threads
    
    rea Print ready threads
    
    sle Print all sleeping threads
    
    spi Print spin locks with long spins
    
    sch Print VP scheduler statistics
    
    lmx Print all locked mutexes
    
    wmx Print all mutexes with waiters
    
    con Print conditions with waiters
    
    stk <tid> Dump the stack of a specified thread
    
    glo Print MT global information
    
    mem <pool name|session id> Print pool statistics
    
    seg Print memory segment statistics
    
    rbm Print block map for resident segment
    
    nbm Print block map for non-resident segments
    
    afr <pool name|session id> Print allocated pool fragments
    
    ffr <pool name|session id> Print free pool fragments
    
    ufr <pool name|session id> Print pool usage breakdown
    
    iov Print disk IO statistics by vp
    
    iof Print disk IO statistics by chunk/file
    
    ioq Print disk IO statistics by queue
    
    iog Print AIO global information
    
    iob Print big buffer usage by IO VP class 
    
    ppf [<partition number> | 0] Print partition profiles
    
    tpf [<tid> | 0] Print thread profiles
    
    ntu Print net user thread profile information
    
    ntt Print net user thread access times 
    
    ntm Print net message information
    
    ntd Print net dispatch information 
    
    nss <session id> Print net shared memory status
    
    nsc <client id> Print net shared memory status
    
    nsd Print net shared memory data
    
    sts Print max and current stack sizes
    
    dic Print dictionary cache information
    
    qst Print queue statistics
    
    wst Print thread wait statistics
    
    ses <session id> Print session information
    
    sql <session id> Print sql information
    
    dri Print data replication information
    
    pos Print /INFORMIXDIR/etc/.infos.DBSERVERNAME file
    
    mgm Print mgm resource manager information
    
    ddr Print DDR log post processing information


There are too many commands to cover all of them in this article. Instead we will focus on the key commands and the ones that are most useful to a DBA for monitoring a server.

Current status of OnLine: onstat -

The command "onstat -" prints out a one line message indicating the current status of your server. This is a quick way to get a status update. Figure 3 has an example output.

Figure 3: Current status: onstat - 


lester@merlin >onstat -

INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 11:54:44 -- 10656 Kbytes


This tells you the INFORMIX-OnLine version, what mode the server is in, how long it has been up and running, and how much memory it is using. If the server is down, you will get an error message saying "shared memory not initialized" like the one in Figure 4.

Figure 4: Current status when OnLine is down 


lester@merlin >onstat -

shared memory not initialized for INFORMIXSERVER 'merlindb713'

lester@merlin >


Database server profile: onstat -p

The "-p" displays the basic I/O and performance profile of your system. Figure 5 contains example output. These statistics are since the server was last rebooted, or when the statistics were last reset with the "onstat -z" option.

Figure 5: Server profile: onstat -p 


lester@merlin >onstat -p

INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:03:37 -- 10656 Kbytes

Profile

dskreads pagreads bufreads %cached dskwrits pagwrits bufwrits %cached

215537   2098656  5208789  95.86   178116   179527   2883605  93.82



isamtot open start   read   write rewrite delete commit rollbk

6955097 2172 2858 2312158 2305564 129     43     22238  0



ovlock ovuserthread ovbuff usercpu syscpu numckpts flushes

0      0            168    6625.92 722.70 35       4320



bufwaits lokwaits lockreqs deadlks dltouts ckpwaits compress seqscans

440      0        2331106  0       0       29       5        66



ixda-RA idx-RA da-RA RA-pgsused lchwaits

836     0      16    833        36


Some of the key elements of this option are:

Reads %cached - This is the percent of your reads that are using OnLine’s buffers instead of accessing disk drives because the records are already in memory. The goal is to have 95% of your reads come from the buffers. If this number is below 95% you may need to increase the BUFFERS parameter in your ONCONFIG file.

Writes %cached - This is the percent of writes that are using your buffers. The goal here is to have 85% or more of your write activity use the buffers. The one exception is during large data loads. The BUFFERS parameter in your ONCONFIG file will affect this value. Be careful when you increase the BUFFERS parameter - if you make BUFFERS too large, it will take memory away from other processes and may slow down your whole system. As you increase BUFFERS, you need to monitor swapping and paging on your operating system.

ovlock - This should be zero. Any other number indicates you have run out of locks since the system was last reset. Increase the LOCKS parameter in the ONCONFIG file.

ovuserthread - This should be zero. This value is increased each time a user tries to connect and the number of current users exceeds the maximum number of user threads set in the ONCONFIG file. The maximum number of user threads is the third value of the NETTYPE parameter in the ONCONFIG file.

ovbuff - This should be zero. This value is increased every time OnLine tries to acquire more buffers than are set by the BUFFERS parameter in the ONCONFIG file.

bufwaits - This should be zero. This indicates the number of times a user thread has waited for a BUFFER.

lokwaits - This should be zero. This indicates the number of times a user thread has waited for a LOCK.

deadlks - This should be zero. This indicates the number of times a deadlock was detected and prevented.

dltouts - This should be zero. This indicates the number of times a distributed deadlock was detected.
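
As an illustration of how these numbers might be checked automatically, the shell fragment below pulls the read and write cache percentages out of "onstat -p" and compares them against the 95% and 85% goals discussed above. This is only a sketch and is not part of the original article; it assumes the numbers appear on the line immediately after the "dskreads ..." header, as they do in normal onstat output:

    onstat -p | awk '/^dskreads/ {
        getline                            # the values are on the next line
        rd = $4 + 0; wr = $8 + 0           # read and write %cached columns
        printf "read cache %.2f%%, write cache %.2f%%\n", rd, wr
        if (rd < 95) print "read cache below 95% - consider raising BUFFERS"
        if (wr < 85) print "write cache below 85% - consider raising BUFFERS"
    }'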

Display message log file: onstat -m

This option displays the last lines of the OnLine message log. This log contains all messages about your server and is a key file to monitor. Figure 6 contains an example. This is a quick way to see the last 20 messages in your log file.

Figure 6: Display message log file: onstat -m 


lester@merlin >onstat -m
INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:41:12 -- 10656 Kbytes
Message Log File: /u3/informix7/online1.log

21:05:15 Checkpoint Completed: duration was 8 seconds.

21:05:43 Checkpoint Completed: duration was 6 seconds.

21:10:58 Checkpoint Completed: duration was 7 seconds.

21:16:06 Checkpoint Completed: duration was 7 seconds.

21:21:13 Checkpoint Completed: duration was 7 seconds.

21:26:20 Checkpoint Completed: duration was 7 seconds.

21:31:28 Checkpoint Completed: duration was 7 seconds.

21:36:36 Checkpoint Completed: duration was 8 seconds.

21:41:43 Checkpoint Completed: duration was 7 seconds.

21:46:51 Checkpoint Completed: duration was 8 seconds.

21:52:00 Checkpoint Completed: duration was 9 seconds.

21:57:09 Checkpoint Completed: duration was 8 seconds.

22:00:42 Logical Log 20 Complete.

22:00:43 Process exited with return code 1: /bin/sh /bin/sh -c /u3/informix7/log_full.sh 2 23 "Logical Log 20 Complete." "Logical Log 20 Complete."

22:02:17 Checkpoint Completed: duration was 8 seconds.


Note: I like to have the OnLine log file always display in one of my windows on screen. The trick to doing this is to use the UNIX "tail" command with the "-f" option. This continually reads the last lines of a file as it is appended to. On my system I run the following command to continually monitor this log: tail -f $INFORMIXDIR/online.log

User status: onstat -u

The ONSTAT option to monitor what your users are doing is "-u". Figure 7 shows example output from this command. The key field is "sessid". This is the session ID that OnLine uses to track the user internally, and it is the number you need to know if you need to kill a user's session (using the "onmode -z" command).

Figure 7: User status: onstat -u 


lester@merlin >onstat -u

INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:52:40 -- 10656 Kbytes

Userthreads

address flags   sessid  user    tty wait        tout    locks   nreads  nwrites

a2d0018 ---P--D 1       informix -  0           0       0       84      250

a2d0458 ---P--F 0       informix -  0           0       0       0       177701

a2d0898 ---P--B 8       informix -  0           0       0       0       0

a2d1558 ---P--D 12      informix -  0           0       0       0       0

a2d2218 Y--P--- 264     lester      4 a35f190   0       1       35063   14836

5 active, 128 total, 17 maximum concurrent


The "flags" column gives you some idea of what a user is doing. The following are the critical flags, based on position within the flag field:

Flags in position 1

B - Waiting on a buffer

C - Waiting on a checkpoint

G - Waiting on a logical log buffer write

L - Waiting on a lock

S - Waiting on a mutex

T - Waiting on a transaction

Y - Waiting on a condition

X - Waiting on a transaction rollback
Flags in position 2
* - Transaction active during I/O error
Flags in position 3
A - Dbspace backup thread

B - Begin work

P - Prepared for commit work

X - TP/XA prepared for commit work

C - Committing work

R - Rolling back work

H - Heuristically rolling back work
Flags in position 4
P - Primary thread for a session
Flags in position 5
R - Reading call

X - Transaction is committing
Flags in position 6
None
Flags in position 7
B - Btree cleaner thread

C - Cleanup of terminated user

D - Daemon thread

F - Page flusher thread

M - ON-Monitor user thread
Logical Logs status: onstat -l

The "-l" option to ONSTAT displays the current status of your logical logs. Figure 8 shows an example display. One problem with this display is that it does not really show you which logs are ready to be reused. In the 5.X versions of OnLine, as soon as a log was backed up and had no open transactions it would be marked as free with an "F" in the flags column. In the current versions of OnLine, logs are not marked as free until right before they need to be reused. (See chapter 26 for an SMI script to show logs that really are free.) One way of using ONSTAT to tell which logs can be reused is to use "onstat -l" with "onstat -x" to display all active sessions. See the next section on "onstat -x".

Figure 8: Logical Logs status: onstat -l 


Physical Logging

Buffer bufused bufsize numpages numwrits pages/io

P-1    0       16      236      60       3.93

phybegin physize phypos phyused %used

10003f   1000    967    0       0.00



Logical Logging

Buffer bufused bufsize numrecs numpages numwrits recs/pages pages/io

L-3    0       16      90303   1522     275      59.3       5.5



address number  flags   uniqid  begin  size used %used

a1ee3e4 1       U-B---- 13      100427 500  500  100.00

a1ee400 2       U-B---- 14      10061b 500  500  100.00

a1ee41c 3       U-B---- 15      10080f 500  500  100.00

a1ee438 4       U-B---- 16      100a03 500  500  100.00

a1ee454 5       U-B---- 17      100bf7 500  432   86.40

a1ee470 6       U-B---- 18      100deb 500  500  100.00

a1ee48c 7       U-B---- 19      100fdf 500  500  100.00

a1ee4a8 8       U-B---- 20      1011d3 500  500  100.00

a1ee4c4 9       U---C-L 21      1013c7 500  23     4.60

a1ee4e0 10      U-B---- 10      1015bb 500  500  100.00

a1ee4fc 11      U-B---- 11      1017af 500  500  100.00

a1ee518 12      U-B---- 12      1019a3 500  500  100.00


The flags column provides status information about each log. The flags are:
A - Newly added; an archive must be run before the log can be used

B - Backed up to tape or "/dev/null"

C - Current logical log file

F - Free and available for use. You will rarely see this flag as logs are not marked as free until right before they are needed.

L - Last checkpoint is in this logical log

U - Used logical log, it may be free if it is backed up and contains no active transactions.
Display transactions: onstat -x

This option displays all current transactions. The most useful column is "log begin". This tells you in which logical log a transaction started. This may be used with the "onstat -l" command to determine which logs are free and may be reused. Find the earliest logical log number in the column "log begin". This tells you which logical log has the earliest active transaction. Any logical logs that are backed up before the log with the earliest transaction will be automatically reused by OnLine.

Figure 9: Transactions status: onstat -x 


lester@merlin >onstat -x



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 01:21:38 -- 10656 Kbytes



Transactions

address flags userthread        locks           log begin       isolation       retrys coordinator

a2f4018 A---- a2d0018   0       0               COMMIT          0

a2f413c A---- a2d0458   0       0               COMMIT          0

a2f4260 A---- a2d0898   0       0               COMMIT          0

a2f4384 A---- a2d1118   0       0               NOTRANS         0

a2f44a8 A---- a2d1558   0       0               COMMIT          0

a2f45cc A-B-- a2d1118   2       21              NOTRANS         0

6 active, 128 total, 7 maximum concurrent
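
That comparison can be scripted. The fragment below is only a sketch and is not part of the article; it picks the transaction rows out of "onstat -x" by their hex address and flags columns (as laid out in Figure 9 above) and reports the lowest non-zero "log begin" value, which you can then compare with the uniqid column of "onstat -l":

    onstat -x | awk '$1 ~ /^[0-9a-f]+$/ && $2 ~ /^[A-Z-]+$/ && $5 + 0 > 0 {
        if (min == 0 || $5 + 0 < min) min = $5 + 0
    }
    END {
        if (min) print "oldest open transaction began in log uniqid", min
        else     print "no open transaction is holding a logical log"
    }'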


Display locks: onstat -k

The "onstat -k" option will display all active locks. Watch out, this display could be long. If you have a large number of LOCKS defined in your ONCONFIG file and many users you could see thousands of rows from this command. Figure 10 is an example of the display.

Figure 10: Display all locks: onstat -k 


lester@merlin >onstat -k



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:53:31 -- 10656 Kbytes



Locks

address wtlist owner   lklist type  tblsnum rowid key#/bsiz

a103e74 0      a2d2218 0      HDR+S 100002  20a      0

1 active, 20000 total, 16384 hash buckets


Who owns a lock?

The "owner" column lists the address in shared memory of the user who owns a lock. Use this with "onstat -u" to see all users, and compare this with the "address" column to identify username of the owner.

What table is locked?

The "tblsnum" column identifies the table that is being locked. Compare this with the output of the following SQL statement to convert a table’s partnum to hex. This will identify which table is locked.

select tabname, hex(partnum) tblsnum 

from systables where tabid > 99;
This SQL statement will provide you with a list of tables and their associated tblsnum to identify which table has a lock placed on it. Figure 11 contains an example of how to identify which table is locked.

Figure 11: What table is locked 


1. Find a list of tblsnum

dbaccess database - <<EOF

select tabname, hex(partnum) tblsnum 

from systables where tabid > 99;

EOF



database selected



tabname                 tblsnum

genjournal      0x0010009E

gjsum           0x0010009F
2. Find what is locked
onstat -k



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 01:47:38 -- 10656 Kbytes



Locks

address wtlist owner   lklist  type  tblsnum rowid key#/bsiz

a103e44 0      a2d1118 a103de4 HDR+X 10009f  0        0

3 active, 20000 total, 16384 hash buckets
3. Compare tblsnum from step 1 and step 2. This identifies the table gjsum as the one that is locked.

The tblsnum 100002 has a special meaning. This indicates a database lock. Every user who opens a database will place a shared lock on the database.

Types of locks

The following lists the types of locks and how to identify them.

Database        - Lock is placed on tablespace 100002

Table   - Lock is placed on actual tablespace with rowid of 0

Page    - Lock is placed on tablespace with rowid ending in 00

Row     - Lock is placed on tablespace with actual rowid (not 00)

Byte    - Lock is placed on tablespace/page with size of bytes

Key     - Lock is placed on a tablespace with a hex rowid (starting with f)
 

Lock type flags

The following lists the lock type flags shown in the "type" column of "onstat -k":

HDR     - Header

B       - Bytes lock

S       - Shared lock

X       - Exclusive

I       - Intent

U       - Update

IX      - Intent-exclusive

IS      - Intent-shared

SIX     - Shared, Intent-exclusive
Dbspaces and chunks status: onstat -d

This ONSTAT command shows two very important items - the layout of your dbspaces and disk chunks, and the status of each chunk and dbspace. Print the output of this command and save it. You will need it if you ever have to perform a restore. This identifies each dbspace and chunk that you need to rebuild your system. Figure 12 contains an example output from one of my training systems.

Figure 12: Dbspaces and chunk status: onstat -d 


lester@merlin >onstat -d

INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:54:44 -- 10656 Kbytes



Dbspaces

address number  flags   fchunk  nchunks flags   owner           name

a2ce100 1       1       1       1       N       informix        rootdbs

a2ce508 2       1       2       1       N       informix        dbspace1

a2ce578 3       1       3       1       N       informix        dbspace2

a2ce5e8 4       1       4       1       N       informix        dbspace3

4 active, 2047 maximum



Chunks

address chk/dbs         offset  size    free    bpages flags pathname

a2ce170 1    1  0       250000  62047           PO-  /u3/dev/rootdbs1

a2ce280 2    2  0       10000   9587            PO-  /u3/dev/dbspace1

a2ce358 3    3  0       10000   9947            PO-  /u3/dev/dbspace2

a2ce430 4    4  0       10000   9947            PO-  /u3/dev/dbspace3

4 active, 2047 maximum


This display also shows how much free space each chunk has, and the status of each chunk.

The "flags" for Dbspaces are:

Position 1

M - Mirrored Dbspace

N - Not Mirrored Dbspace
Position 2
X - Newly mirrored

P - Physical recovery underway

L - Logical recovery underway

R - Recovery underway
Position 3
B - Blobspace
The "flags" for Chunks are:

Position 1

P - Primary

M - Mirror
Position 2
O - On-line

D - Down 

X - Newly mirrored

I - Inconsistent
Position 3
B - Blobspace

- - Dbspace

T - Temporary Dbspace
Dbspaces and chunks I/O: onstat -D

The "onstat -D" option shows I/O by chunk. This is very helpful in performance tuning. Your goal is to spread your reads and writes evenly across all chunks. Figure 13 shows an example where one chunk is utilized for all I/O, and all other chunks are inactive. The I/O is not spread out among chunks, which is not an effective use of disk.

Figure 13: Dbspaces and chunks IO: onstat -D 


lester@merlin >onstat -D



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:55:09 -- 10656 Kbytes



Dbspaces

address number  flags   fchunk  nchunks flags   owner    name

a2ce100 1       1       1       1       N       informix rootdbs

a2ce508 2       1       2       1       N       informix dbspace1

a2ce578 3       1       3       1       N       informix dbspace2

a2ce5e8 4       1       4       1       N       informix dbspace3

4 active, 2047 maximum
Chunks

address chk/dbs offset  page Rd page Wr pathname

a2ce170 1    1  0       36563   179558  /u3/dev/rootdbs1

a2ce280 2    2  0       3       0       /u3/dev/dbspace1

a2ce358 3    3  0       2       0       /u3/dev/dbspace2

a2ce430 4    4  0       2       0       /u3/dev/dbspace3

4 active, 2047 maximum
  

Page write status: onstat -F

There are three ways OnLine writes pages from shared memory buffers to disk. Foreground writes occur when OnLine needs a buffer and must interrupt processing to flush buffers to disk to free a buffer. These are the least desirable type of writes. Background writes (LRU writes) occur when a set percent of the buffers are dirty. This is controlled by the LRU parameters in the ONCONFIG file. These do not interrupt user processing and are the best for interactive systems. Chunk writes occur at checkpoints, and all dirty buffer pages are written to disk. The more dirty pages, the longer a checkpoint will take. Checkpoint writes are sorted and optimized, but the longer a checkpoint is, the longer it will block user activity. Checkpoint writes are best for batch systems. The ONSTAT option to monitor this is "-F". The goal should be to see zero foreground writes (Fg Writes). Figure 14 contains an example.
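
As a small illustration (not part of the article, and only a sketch), the fragment below pulls the three write counters out of "onstat -F" and warns if any foreground writes have occurred. It assumes the counters are on the first three-field, all-numeric line, as in Figure 14 below:

    onstat -F | awk 'NF == 3 && $1 ~ /^[0-9]+$/ {
        printf "Fg Writes %d, LRU Writes %d, Chunk Writes %d\n", $1, $2, $3
        if ($1 > 0) print "foreground writes detected - consider tuning the LRU parameters"
        exit
    }'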

Figure 14: Page writes status: onstat -F 


lester@merlin >onstat -F



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:55:32 -- 10656 Kbytes



Fg Writes LRU Writes    Chunk Writes

168     172280          5277



address flusher state   data

a2d0458 0       I       0       = 0X0

        states: Exit Idle Chunk Lru
  

New monitoring and debugging commands (version 7.X): onstat -g

The "-g" commands are a whole new subset of commands in OnLine version 7. Figure 2 (earlier in this chapter) contains a list of all the -g commands. This section will discuss the most interesting of these.

List all threads: onstat -g ath

This option lists all active threads. Figure 15 shows an example.

Figure 15: List all active threads: onstat -g ath 


INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:56:09 -- 10656 Kbytes



Threads:

tid tcb           rstcb   prty  status                  vp-class name

2   a336b70 0     2     sleeping(Forever)       3lio     lio vp 0

3   a336dd0 0     2     sleeping(Forever)       4pio     pio vp 0

4   a337088 0     2     sleeping(Forever)       5aio     aio vp 0

5   a337340 0     2     sleeping(Forever)       6msc     msc vp 0

6   a337af8 0     2     sleeping(Forever)       7aio     aio vp 1

7   a337e00 a2d0018 4   sleeping(secs: 1)       1cpu     main_loop()

8   a34ab48 0     2     running                 1cpu     sm_poll

9   a34b770 0     2     running                 8tli     tlitcppoll

10  a34bce0 0     2     sleeping(Forever)       1cpu     sm_listen

11  a3c4a28 0     2     sleeping(secs: 2)       1cpu     sm_discon

12  a3c4e58 0     3     sleeping(Forever)       1cpu     tlitcplst

13  a3d0680 a2d0458 2   sleeping(Forever)       1cpu     flush_sub(0)

14  a3d0e40 a2d0898 2   sleeping(secs: 8)       1cpu     btclean

30  a35ea58 a2d1558 4   sleeping(secs: 1)       1cpu     onmode_mon

283 a39ef38 a2d2218 2   cond wait(sm_read)      1cpu     sqlexec

List Virtual Processor status: onstat -g sch

This option provides the means to identify which "oninit" UNIX process corresponds to which OnLine server Virtual Processor. When you perform a "ps -ef" on UNIX you will see many "oninit" processes running. Each one is performing a specific task for the database server. Use the UNIX pid column from "ps -ef" to correlate a process to the pid column from "onstat -g sch". Figure 16 contains example output of this command.
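
A sketch of that correlation is shown below (it is not part of the article); it lists the pid column from the virtual processor rows of "onstat -g sch" and shows the matching oninit processes from "ps -ef". It assumes the VP rows are the ones whose first field is a number, as in Figure 16 below:

    onstat -g sch | awk '$1 ~ /^[0-9]+$/ { print $2 }' |
    while read PID
    do
        ps -ef | awk -v p="$PID" '$2 == p'      # show the matching oninit process
    done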

Figure 16: Virtual Processor status: onstat -g sch 


lester@merlin >onstat -g sch



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:56:46 -- 10656 Kbytes



VP Scheduler Statistics:



vp pid class    semops busy waits      spins/wait

1  230 cpu      21              0               0

2  231 adm      0               0               0

3  232 lio      277             0               0

4  233 pio      62              0               0

5  234 aio      144794          0               0

6  235 msc      756             0               0

7  236 aio      64028           0               0

8  237 tli      3               0               0
  

List SQL statement types: onstat -g sql

This is the most interesting of the new options. This option allows you to drill down and see the actual SQL statement that a user is executing. Figure 17 shows an example of listing a summary of all the SQL statements running. Then, by using the session id, you can see details and the actual SQL statements being run. Figure 18 contains an example of this detail.

Figure 17: List all SQL statements: onstat -g sql 


lester@merlin >onstat -g sql



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:52:02 -- 10656 Kbytes



Sess SQL       Current  Iso Lock           SQL   ISAM F.E.

Id   Stmt type Database Lvl Mode     ERR   ERR  Vers

264  INSERT    ffsdw    NL  Not Wait -264   0   7.23


List SQL statement for a specific user: onstat -g sql sid

Figure 18: SQL statement of a user: onstat -g sql sid 


lester@merlin >onstat -g sql 264



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:51:10 -- 10656 Kbytes



Sess SQL       Current  Iso Lock     SQL  ISAM F.E.

Id   Stmt type Database Lvl Mode     ERR  ERR  Vers

264  INSERT    ffsdw    NL  Not Wait -264  0   7.23



Current SQL statement :

insert into gjsum select exp_org, exp_prog, bud_obj_code, job_num,

sum (exp_amount) from genjournal group by 1, 2, 3, 4



Last parsed SQL statement :

insert into gjsum select exp_org, exp_prog, bud_obj_code, job_num,

sum (exp_amount) from genjournal group by 1, 2, 3, 4


This option is very useful in a couple of cases. One is when you do not have access to the SQL code and need to optimize your database tables and indexes. By running this command repeatedly, you can see the SQL statements that are processed. Then, by collecting and examining the SQL, you can determine where to add indexes to improve the performance of the system.

A second use for this option is to debug program transactions. I used this to help a programmer debug his program by running "onstat -g sql sid" while he was running his program. I could see error conditions and SQL errors that he was not catching in his program.
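
One simple way to collect the SQL over a period of time is a small shell loop like the one below. This is only a sketch and is not from the article; session 264 is just the example id from Figure 17, and the output file name is arbitrary:

    SID=264                              # example session id from Figure 17
    i=0
    while [ $i -lt 60 ]                  # one snapshot a minute for an hour
    do
        date >> sql_trace.$SID
        onstat -g sql $SID >> sql_trace.$SID
        sleep 60
        i=`expr $i + 1`
    done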

List user sessions: onstat -g ses

This option shows additional information about users’ sessions, including how much memory each session is using. Figure 19 shows an example. This option can also be used to display detailed information about a session.

Figure 19: List user sessions: onstat -g ses 


lester@merlin >onstat -g ses



INFORMIX-OnLine Version 7.23.UC1 -- On-Line -- Up 7 days 12:57:48 -- 10656 Kbytes



session                                         #RSAM   total   used

id  user     tty        pid      hostname       threads memory  memory

265 informix -  0       -               0       8192    4680

264 lester   4  4249    merlin          1       106496  97840

10  informix -  0       -               0       8192    4680

7   informix -  0       -               0       16384   13144

6   informix -  0       -               0       8192    4680

4   informix -  0       -               0       16384   13144

3   informix -  0       -               0       8192    4680

2   informix -  0       -               0       8192    4680

Repeat ONSTAT commands: -r

To continually repeat an ONSTAT command use the "-r # of seconds" option. This is very useful when you need to monitor a situation. The following example displays the status of the logical logs every 10 seconds.

onstat -l -r 10
Clear ONSTAT shared memory statistics: onstat -z

The OnLine statistics are reset every time OnLine is restarted. To reset all the statistics while OnLine is running, without shutting it down, use the following command:

onstat -z
This will clear all statistics for ONSTAT and the SMI tables.
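
A common measurement pattern, shown here only as a sketch (it is not taken from the article), is to zero the counters, let the workload of interest run, and then take a fresh profile:

    onstat -z                            # reset the counters
    sleep 3600                           # or run the batch job you want to measure
    onstat -p > profile.out              # this profile now covers only the last hour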

Conclusion

These utilities provide a powerful toolkit for the DBA to care for and monitor a database server. Articles in future issues of this newsletter will explore the other OnLine utilities. My web site also has an article covering the eight basic OnLine utilities.
 

Lester Knutsen, Advanced DataTools Corporation
Phone: 703-256-0267
Web: www.advancedatatools.com
Email: lester@advancedatatools.com
 
Background Processing and Shell Scripting For Informix
by Ron M. Flannery

Background Processing

One of the powerful features of Unix is background processing. You can run commands in the background and continue working, or schedule commands to run automatically at particular times. This is referred to as "batch processing" in many computing environments. Unix provides many different ways to do this.

Using cron

cron is the Unix command that is used to schedule recurring tasks. Commands are stored in a file specific to your user id and can be scheduled to run on a certain month, day, week day, hour, and minute. To set up a cron job, do the following:

1. Edit your crontab file. The easiest way to do this is to type "crontab -e" from the command line. Your default text editor (usually vi) will open with your crontab file. The crontab -e option is not available on all Unix systems. If not available, you can do the following in place of steps 1 and 2:

1. Type the following from the Unix command line: crontab -l > filename

2. Edit filename, make your changes, save, and exit.

3. Type the following from the Unix command line: crontab filename. Be very careful when editing the crontab file; this command will overwrite your existing crontab.

The crontab file can be hard to understand at first glance. Table 1 explains the fields in a crontab file. You can enter a number in any of the first five fields to indicate a value for the field. For example, to run the command at 30 minutes past each hour, enter 30 in the minutes field. An asterisk in any field means to not set a specific value for the field. To get more details, type "man crontab" from the Unix command line. The following example runs the command "who >> whoaudit.out" at 30 minutes past each hour of the day.
      30 * * * * who >> whoaudit.out
2. Save the crontab file and exit your editor.

3. Type "crontab -l" to verify your changes were saved.

4. The process will run at the specified time.

Table 1. Understanding the crontab file. 


Field # Description
1         Minute (0-59)
2         Hour (0-23)
3         Day of month (1-31)
4         Month (1-12)
5         Day of week (0-6, 0 is Sunday)
6         The command you want to run
More examples:
    0 1 * * 0 update_db_stats.sh         # runs update_db_stats.sh every Sunday at 1 AM.
    0 1 1 * * run_monthly_reports.sh     # runs run_monthly_reports.sh on the 1st of every month at 1 AM.
    30 6 * * * reboot_system.sh          # runs reboot_system.sh at 6:30 AM every day.
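
Tying cron back to Informix, a crontab entry like the one below could refresh the optimizer statistics for a database every night at 2 AM. This is only a sketch and not from the article; the stores database name and output path are just examples, and it assumes cron's environment has INFORMIXDIR, INFORMIXSERVER, and PATH set so that dbaccess can run:

    0 2 * * * echo "update statistics;" | dbaccess stores - >> /tmp/updstats.out 2>&1
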
Using nohup and ampersand (&)

The nohup command and the ampersand (&) are the easiest way to do background processing. When the ampersand is given at the end of any command line, that command is immediately started in the background. For example, the command "dbaccess stores upd_query1.sql &" will immediately run the query upd_query1.sql in the background. The results of the query will be displayed on your screen.

If you want to place the output of a background command in a file, you can use "nohup." The nohup command serves two purposes:

1. To place the output of the command(s) into a file. The default filename is nohup.out.

2. To keep the process running if you terminate your Unix session. If you use ampersand without "nohup," the process will be terminated if you exit from Unix.

You can redirect the output of your nohup commands into a file other than "nohup.out." To do this, use the >, >>, and 2> notations. Examples of each notation are found in Listing 1.

Listing 1. Using the nohup command.

nohup dbaccess stores run_query.sql &    # runs "dbaccess stores run_query.sql",
                                         # placing output in nohup.out
nohup run_queries.sh > r.out 2>&1 &      # runs run_queries.sh, placing both error and
                                         # standard output in r.out, overwriting existing r.out
nohup run_queries.sh >> r.out 2>&1 &     # runs run_queries.sh, placing both error and
                                         # standard output in r.out, appending to contents of r.out
In Unix, the notation "2>" is used to redirect error output. This is also known by the Unix file handle "stderr." This is different from standard output, or the "stdout" file handle, which is represented by the ">" notation. You can use "2>" to redirect errors to a file (e.g., 2>file.out) or to the same destination as standard output with "2>&1." Thus, in the last two examples in Listing 1, both error and standard output are captured in the file r.out. The "2>&1" notation is simpler in that you only need to look in one place for the results of your commands.

Using at

The at command allows you to submit the specified command at a certain time. Unlike cron, at is designed to be done one time only. The syntax of at is:

    at time [< command]
where the command in brackets is optional. The results of the at command will be E-mailed to you unless you redirect the output of the command.

Simply typing "at time" will put you into an interactive mode. Type the commands you want to run. After you have typed the last command, press Ctrl+D and the commands will be submitted. If you want to submit a shell script to run at a given time, use the "<" convention, as in:

    at 9pm < run_queries.sh
This will submit a job to run the run_queries.sh script at 9:00 PM.

Type "man at" for details on how to use "at" on your system.

Using batch

The batch command is very similar to at: It allows you to type commands or use the "<" notation to run an existing shell script. The difference is that batch is started immediately (if there is room in the queue). Thus, the command

    batch < run_queries.sh
will start the job as soon as possible. As with at, the results of the command will be E-mailed unless you redirect output.

Creating an advanced shell script

If you want to go beyond basic shell scripting and create advanced applications, this section is for you. To really create mission-critical applications, there are many additional things you can do, like adding control arguments, enhancing your error handling and sending E-mail. Here is a summary of the different parts of an advanced shell script for Informix applications:

1. Shell identification. This line describes which shell language is used for this particular script. In Listing 2, this is represented by the line "#!/bin/sh".

2. Descriptive comments. Comments help identify the shell script, what it does, and when it was modified. They are certainly not required but greatly enhance the readability of the script. In Listing 2, these are represented by lines beginning with the "#" character.

3. Shell functions. As with other programming languages, shell scripts can have functions. These are common commands that might be used one or more times in the shell script. Again, shell functions are not required. Note that in shell scripts, the shell functions must occur before they are called in the shell script. The functions in Listing 2 are Print_usage and Error_handler.

4. Mainline. The mainline portion of a shell script is what actually does the processing. The first commands that are not in shell functions are the first that are executed. In Listing 2, the mainline begins with the line "ME=`basename $0`", which uses the Unix basename command to extract the name of the current shell script (you could also hard-code the name, as in "ME=run_commands").
 

A. Control argument processing. Your script can accept control arguments just like any other Unix command. This is not necessary but enhances the functionality of the shell script. In Listing 2, this is the block starting with the "while getopts" line and continuing to the "done" statement.

B. Running commands. The next part of the script is what does the work. The shell script in Listing 2 runs an SQL file (create_rpt_table.sql), an executable Informix-4GL program (edit_data.4ge), two Ace reports (run_rpt1.arc and run_rpt2.arc), and a shell script (do_cleanup.sh).

C. Exiting and reporting status. The last few lines in a shell script are often used to report the success of the operations. Notice how we send mail to the appropriate people and exit with a status of zero (0).

Listing 2. How to create a functional shell script.
#!/bin/sh

# 

# NAME: run_commands.sh

# PURPOSE: Run a series of Informix database commands

# DETAIL: This script runs the processes associated with this article

# MODIFICATION HISTORY:

# AUTHOR DATE COMMENTS

# Ron M. Flannery 8/2/97 Created initial shell script.

# Ron M. Flannery 8/3/97 Added control arguments.

#

Print_usage()

{

        echo "\nUSAGE: $ME [-m] [-o outfile] "

}

Error_handler()

{

MSG="$*" # $* is what was passed to Error_handler

echo "\n*** $ME: FATAL ERROR OCCCURED: $MSG "

        echo " PROCESS ABORTED ****"

# the next line sends E-Mail to everyone in the $MAIL_LIST variable

echo "FATAL ERROR in $ME: $MSG" | mail -s "ERROR IN $ME" $MAIL_LIST

exit 1

}

# BEGIN MAIN FLOW

# set some default values

ME=`basename $0`

MONTHLY=              # default for monthly report option. Can be over-ridden when

                      # running this script with the -m argument on the Unix command line.

OUTFILE=$ME.out       # output file for the command. Can be over-ridden when running

                      # this script with the -o argument on the Unix command line.

MAIL_LIST="ron benny" # these people receive E-Mail about the status of this script

while getopts mo: o   # getopts reads command line used to run this script and

                      # places values in $o env variable.

                      # the "mo:" says to allow -m as an argument and "-o file" (colon)

                      # see the Print_usage function above for a description of allowed syntax.

do

        case $o in

           m)

                MONTHLY=true

                ;;

           o)

                OUTFILE=$OPTARG

                ;;

           *)

                Print_usage

                exit 1

                ;;

        esac # signals end of the case statement

done         # signals end of the do loop



shift `expr $OPTIND - 1` # this clears any control arguments from command line

                         # you can now use $1 .. $n to access additional arguments.

                         # run the commands to create the reports 



dbaccess stores create_rpt_table.sql >> $OUTFILE # run sql file create_rpt_table.sql

if [ $? != 0 ]

then

        Error_handler "Running create_rpt_table.sql"

fi



edit_data.4ge >> $OUTFILE # edit data in the new report table; place results in $OUTFILE

if [ $? != 0 ]

then

        Error_handler "Running create_rpt_table"

fi



# run ace reports

sacego run_rpt1.arc

if [ $? != 0 ]

then

        Error_handler "Running run_rpt1.arc"

fi



sacego run_rpt2.arc

if [ $? != 0 ]

then

        Error_handler "Running run_rpt2.arc"

fi



if [ $MONTHLY ] # if the $MONTHLY variable is set

then

        sacego run_monthly.arc

if [ $? != 0 ]

then

        Error_handler "Running run_monthly.arc"

fi

fi



do_cleanup.sh         # shell script to cleanup files that were created

echo "$ME COMPLETED SUCCESSFULLY at `date`" | mail -s "$ME COMLETED"

$MAIL_LIST

exit 0
Summary

Background processing and shell scripts can be used to create complex or simple applications. A shell script allows you to create a stream of commands that are executed as a controlled process, properly handling and reporting errors. Background processing automates commands or shell scripts that might otherwise be run manually. Proper use of background processing and shell scripts can greatly enhance any Informix environment.

______________________________________________________________________

By Ron M. Flannery, rflanner@speedlink.net
http://www.speedlink.net/~rflanner
Contributing author to "Informix Unleashed!" by SAMs
Lead author of upcoming book, "Special Edition, Using Informix" by Que Publishing.


WAIUG Corporate Membership

Corporate membership is available to companies who wish to participate in user group activities. The benefits of corporate membership include:

All the benefits of individual membership for up to 12 individuals, at a reduced cost. (Additional members may be added if needed at the individual membership fee.)

One designated point of contact from the corporation can add and delete individual members. The membership stays with the company, not the individual, should an individual member leave the company.

Company purchase orders will be accepted for user group activities.

The corporate membership fee is $200.00. This allows a company to sign up as many as 12 individuals and be invoiced for the membership fee. Members will receive four newsletters and all membership announcements and mailings.

Many thanks for the support of our current corporate members:

  • HQ Defense Courier Services
  • Interealty Corporation
  • London Fog Industries
  • Marriott International
  • National Association of Securities Dealers
  • ProLink Services L.L.C.
  • Reynolds Metals Corporation
  • Sallie Mae
  • United Communication Systems
  • Upgrade Corporation of America
  • U.S. Order
  • Vector Research, Incorporated
WAIUG Sponsorship

The user group has been supported by many companies over the past years. The major financial sponsors of the user group have been:

  • Advanced DataTools Corporation
  • Summit Data Group
  • Business Systems Support Group, Inc.
  • Informix Software, Inc.
  • Pure Software, Inc.
  • Compuware Corporation
The options listed below are available for companies who would like to participate in our activities. Please contact Nick Nobbe, Program/Sponsorship Director, at 202-707-0548, or Lester Knutsen, President, 703-256-0267, for more information about sponsorship opportunities.

• Presentation at Meetings - We plan on one presentation per meeting from vendors that have products that work with Informix.

• Newsletter Sponsorship - The newsletter is produced quarterly. Each mailing goes to over 900 users in the Washington area. Companies sponsoring the newsletter may place a one page ad. This is a great way to announce a new product or job opening to 900 Informix users.

• Local Forums - We have held three one-day Forums for our members, offering numerous seminar sessions and an exhibit hall with 10-14 vendors demonstrating products which work with Informix. These events have been attended by over 200 people, and have been a very exciting way to share new developments related to Informix database software. Exhibitors have found this to be a very worthwhile event targeted at Informix users.
 


This newsletter is published by the Washington Area Informix User Group
Lester Knutsen, President/Editor
Washington Area Informix User Group
4216 Evergreen Lane, Suite 136, Annandale, VA 22003
Phone: 703-256-0267
lester@advancedatatools.com