Washington Area Informix Users Group


October 1998 Newsletter

Volume 8, No. 4


Highlights of This Issue

Informix Data Warehouse Seminar
The Unofficial Informix FAQ by June Tong
 

Next Meeting - December 2, 1998

Date and Time: 
December 2, 1998, 9:00 a.m. to 12:00 noon

Location:
Informix Software Corporation
8065 Leesburg Pike, Suite 600, Vienna, VA 22182

The Future of 4GL - Dynamic 4GL

Find out how Dynamic 4GL allows businesses built on INFORMIX-4GL to retain and extend their existing 4GL applications, while saving millions of dollars in re-engineering efforts and years in time to market. Jed Corman, Informix Systems Engineer from the Vienna office, will present and demo Dynamic 4GL and delve into technical detail regarding the Dynamic 4GL environment and extensions. The meeting is open to everyone. Please RSVP to 703-256-0267, ext. 4.

Business Objects - CEIS-Health Affairs: A Case Study

Overview of Goals/Mission of CEIS program

  • Discussion of Production vs. Ad-Hoc Reporting Needs
  • Integration of Informix and Business Objects technologies to enhance goals/objectives
  • Discussion of future enhancements to the system and rollout plans for the future
  • Brief demo of the Metacube OLAP access pack and BOA reporting/analysis capabilities
Washington Area Informix User Group - Training Day

Wednesday, March 24, 1999, 9:00 am - 4:00 pm

University of MD, College Park, AV Williams Engineering Bldg.

A one-day technical training event for WAIUG members is planned at the University of Maryland during the March spring break, with six half-day sessions. These are technical how-to sessions. The cost will be $10 for members and includes lunch. Advance registration is required. Our last Training Day (1997) had over 100 members participating and included sessions on the following topics:

  • Enterprise Data Security;
  • Migrating to INFORMIX-OnLine 7.X From SE or 5.X;
  • Web Pages From Your Informix Database;
  • Query Optimization;
  • Developing Applications with the Web DataBlade;
  • Benefits of Data Warehousing.
Depending on the space available at the University of Maryland, we have openings for a few more sessions. If you are interested in conducting a half-day technical session at this event, please contact Lester Knutsen at 703-256-0267.

Newsletter Sponsorship

We would like to thank the following companies for sponsoring this issue:

  • Advanced DataTools Corporation
  • Business Objects
Election of New Officers to the Board of Directors

At our last meeting, on September 16, 1998, we held our annual election for the user group board of directors. These are volunteer positions and require a lot of hard work to keep our user group operating smoothly. I would like to thank the following folks for volunteering for these jobs.

  • President - Lester Knutsen
  • Vice-President - Bob Carts
  • Program Director/Secretary - Nick Nobbe
  • Treasurer - James Edmiston
  • Membership Director - John Petruzzi
  • Members at large - Sam Hazelett, Peter Wages, Charles Kaltwasser
  • Informix Representatives - Mike Tinius, Sherryl Dorch

Last Meeting - September 16, 1998

 

This meeting featured two presentations that members of our user group are giving at the Informix Worldwide conference plus news from the conference. Copies of these presentations will be available on our web site soon.
 
 

  • Transitioning to Informix Dynamic Server on NT from UNIX  - By Mike Tinius, Principal Systems Engineer, Informix Software, Inc.
    • This session focused on Informix Dynamic Server (IDS) on NT and compared IDS on NT with IDS on UNIX in three major areas: architecture, product installation, and monitoring and tuning. Attendees familiar with IDS on UNIX walked away with valuable insight into how to transition to IDS on NT, and the session also explored performance monitoring and tuning strategies. 
  • Using CLI to Connect to Informix Databases through ODBC  - By James Edmiston, Senior Software Engineer, Science Applications International Corporation.
    • This presentation discussed how to enable third-party tools to access Informix databases through ODBC by installing and configuring Informix CLI for Microsoft Windows. This is important because many products on the market for developing client applications rely on the ODBC standard. The presentation also explored the additional Informix connectivity products and components packaged in the Informix Client Software Developer's Kit (SDK). 

Informix Seminar on Data Warehouse

November 4, 1998

The McLean Hilton, 7920 Jones Branch Drive, in McLean. 

Delivering the Data Warehouse: 

Packaged Solutions for Rapid Implementation
 

  • Informix presents an industry luminary's vision of the future of data warehousing and how it will deliver increasing business value through powerful analytic applications.
  • How to quickly and easily implement a data warehousing solution and accelerate ROI with a new, flexible, integrated solution suite from Informix.
  • How to avoid pitfalls and enjoy a smooth data warehouse implementation, presented by knowledgeable and experienced Informix professionals.
  • What companies at the forefront of data warehousing have to say about their experiences and success with their data warehouse implementations, the problems they solved, and the benefits they gained.

Audience:

 
  • Executives and management from IT and business units who want to understand more about data warehousing and the value it can deliver.
  • Those interested in learning more about Informix's new solution suite, which delivers a powerful data warehousing solution quickly and cost-effectively.

Agenda:

 

8:00 - 9:00 Registration and continental breakfast

9:00 - 12:00 The Future of Data Warehousing 

Bill Inmon - the "Father of Data Warehousing" 

Delivering on the Future: Packaged Data Warehouse Solutions

Informix Data Warehousing Business Unit

Delivering on the Future: Rapid Data Warehouse Implementations

Informix Data Warehousing Business Unit

Case Study

Bureau of Land Management, CKE Restaurants

12:00 - 1:30 Lunch
 

Offer:
 

Attendees will receive a copy of the book, Informix Data Warehousing: Best Practices.

RSVP to 703-847-2900.



 

The Unofficial Informix FAQ

by June Tong

Frequently Asked Questions about Informix

(In case you're wondering, the official Informix FAQ is at http://www.iiug.org/techinfo/faq/faq_top.html, but since they didn't include many of these questions, I decided to publish my own.)

General

  • Is product x (or version y of product x) still supported? Are there plans to stop supporting it?

  • See the Informix Product Lifecycle document for descriptions of the different support categories and an Excel- or Acrobat-format list of products.

  • What patches do I need for platform x?

  • The release notes (in $INFORMIXDIR/release) contain a list of patches which have been determined by Informix Product Development to be necessary for running the product. This list is created prior to release of the product, so any new patches identified after the release of the product (e.g. as a result of specific customer problems) will not be listed in the release notes, but will probably be listed in the release notes of the next version. Therefore, if you are looking for "the definitive list of patches that should be installed", there is no such thing. Informix generally recommends that the customer consult their OS vendor for a list of recommended patches.

    You may also be able to find helpful information about recommended patches on your OS vendor's support site.

  • How can I tell what OS patches are already installed?

  • This varies by Operating System.

    • Solaris: Run "/bin/showrev -p" or see /var/sadm/patch
    • HP: Run "/usr/sbin/swlist" as root
  • What documentation does Informix have on Year 2000 compliance?

  • Which versions support DBCENTURY?
    Do I need to upgrade my engine and/or my front-end for DBCENTURY?

    See Informix's webpages Year 2000 and Informix Products and Year 2000 Support in Client API Products for information on Year 2000 compliance.

    Note that all Informix products are "Year 2000-compliant" in that they have always stored dates with the complete 4-digit century (actually, as an integer representing the number of days from December 31, 1899). What DBCENTURY adds is the ability to have 2-digit years expanded automatically to the appropriate century, as determined by the user.

    2-digit years are expanded to 4-digits when the string is converted to a DATE type. If you are using 4GL, for example, the statement:

     LET date_variable = "9/9/99"
    causes the expansion to be done by 4GL before assigning the value to date_variable. However, the statement:
     INSERT INTO orders (order_date) VALUES ("9/9/99")
    sends the string to the engine and causes the expansion to be done by the engine. Whether you need to recompile your 4GL applications with a DBCENTURY-aware version of 4GL, or whether it is sufficient to upgrade the engine and set DBCENTURY in the engine when initializing, will depend on which side is doing the string-to-date conversion.

Generic SQL

  • How can I do case-insensitive searches, e.g. search for "Smith", "smith", "SMITH", or even "sMiTh"?

  • If you are not using Illustra or Universal Server, this is not an easy thing to do. If the data is stored in mixed-case, your options are:

    1. Break the word down into its individual letters and search for a regular expression using both the upper- and lower-case form of each letter, e.g. to search for lname equal to "Smith":

       SELECT * FROM customer
       WHERE lname MATCHES "[Ss][Mm][Ii][Tt][Hh]";

       The IIUG Software Repository, 4GL (noncaseqry), contains a 4GL function which will create this clause for you.

    2. Using an UPPER() function (see the next question below), convert both the column and the filter value to uppercase, e.g.

       SELECT * FROM customer
       WHERE UPPER(lname) = "SMITH";

    3. Create a dummy column to store an all-uppercase copy of the value, e.g.

       ALTER TABLE customer ADD (uc_lname CHAR(20));
       UPDATE customer SET uc_lname = UPPER(lname);
       SELECT * FROM customer
       WHERE uc_lname = "SMITH";

    Option 3 is probably the best, because it is the only one that can make use of an index (on the uc_lname column). Option 1 will do a sequential scan (or possibly a scan of the entire index) to compare each value of lname to the regular expression. Option 2 will also do a sequential or index scan, apply UPPER() to each lname value, and compare the result to "SMITH". Both will be slow.
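
    For option 3 to pay off, an index must exist on the uppercase copy, and uc_lname has to be kept in sync whenever lname changes (the index name below is just an example):

     -- The query in option 3 can then use this index instead of scanning the table.
     CREATE INDEX ix_uc_lname ON customer (uc_lname);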

    If you are using Universal Server or Illustra, you can create a user-defined function, e.g. UPPER(), and create an index on UPPER(lname). This would allow you to store lname in mixed-case, and still use the index to search for "Smith". A query such as:

    SELECT * FROM customer
    WHERE UPPER(lname) = "SMITH";
    would use an index on UPPER(lname).
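
    As a sketch, assuming the user-defined UPPER() routine described above has been created (functional indexes generally require the routine to be registered as non-variant), the index might look like this, with the index name being just an example:

     -- Index on the function's result, so mixed-case data can be searched
     -- case-insensitively without a scan of the table.
     CREATE INDEX ix_upper_lname ON customer (UPPER(lname));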
  • How do I convert a character string to uppercase/lowercase?

  • Informix-SQL and 4GL have built-in functions UPSHIFT() and DOWNSHIFT() which convert a character string to uppercase/lowercase.

    Version 7.3 engines now have built-in functions UPPER() and LOWER() to convert character strings to uppercase/lowercase.

    For pre-7.3 engines, see the IIUG Software Repository, Misc (upper_spl, upshift_spl) for sample stored procedures to convert character strings to uppercase. These can easily be modified to convert strings to lowercase.
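
    As a rough idea of what such a procedure looks like, here is a minimal sketch (it is not the actual upper_spl/upshift_spl code); it walks the string one character at a time using the same constant-subscript trick as the Cut()/FindStr() procedures later in this FAQ:

     CREATE PROCEDURE my_upshift (src VARCHAR(255))
       RETURNING VARCHAR(255);
       DEFINE lowers CHAR(26);
       DEFINE uppers CHAR(26);
       DEFINE lw CHAR(26);
       DEFINE up CHAR(26);
       DEFINE c CHAR(1);
       DEFINE res VARCHAR(255);
       DEFINE len INTEGER;
       DEFINE i INTEGER;
       DEFINE j INTEGER;

       LET lowers = "abcdefghijklmnopqrstuvwxyz";
       LET uppers = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
       LET res = '';
       LET len = LENGTH(src);

       FOR i = 1 TO len
         LET c = src[1,1];        -- current character
         LET lw = lowers;
         LET up = uppers;
         FOR j = 1 TO 26          -- walk both alphabets in step
           IF c = lw[1,1] THEN
             LET c = up[1,1];     -- lowercase letter: take its uppercase twin
             EXIT FOR;
           END IF;
           LET lw = lw[2,26];
           LET up = up[2,26];
         END FOR;
         LET res = res || c;
         LET src = src[2,255];    -- shift to the next character
       END FOR;

       RETURN res;
     END PROCEDURE;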

  • Are there functional equivalents for Oracle's DECODE, NVL, TO_DATE, TO_CHAR, etc. functions?

  • Version 7.3 engines provide these functions for Oracle compatibility. The functions work exactly as the Oracle functions do, except for the DATE functions, due to differences in the date types.

    For pre-7.3 engines, check the IIUG Software Repository, SQL (orclproc).
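
    For example, on a 7.3 engine and using the demonstration customer table (the substitute values are made up):

     -- NVL() substitutes a value when the first argument is NULL; DECODE()
     -- maps coded values to results, with an optional default last argument.
     SELECT lname,
            NVL(phone, "no phone on file"),
            DECODE(state, "CA", "California",
                          "VA", "Virginia",
                                "other")
     FROM customer;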

  • How can I find the location of a character within a string? Say I have the string "FOO-BAR", how can I find the "-" in the string?
  • The following SPL procedures do this. FindStr() returns the position of the first occurrence of a character in a string (0 if it is not found), and Cut() uses it to return the portion of the string before that character. (FindStr() is shown first since Cut() calls it.)

    CREATE PROCEDURE FindStr (str VARCHAR(255), ch CHAR(1))
      RETURNING INTEGER;
      DEFINE i INTEGER;

      -- Look at the first character, then shift the string left by one,
      -- so only constant subscripts are needed.
      FOR i = 1 TO LENGTH(str)
        IF str[1,1] = ch THEN
          RETURN i;
        END IF;
        LET str = str[2,255];
      END FOR;

      RETURN 0;    -- character not found
    END PROCEDURE;


    CREATE PROCEDURE Cut (string VARCHAR(255), delimiter CHAR(1))
      RETURNING VARCHAR(255);
      DEFINE i INTEGER;
      DEFINE loc INTEGER;
      DEFINE res VARCHAR(255);

      LET loc = FindStr(string, delimiter);
      IF loc = 0 THEN
        RETURN string;    -- no delimiter: return the whole string
      END IF;

      -- Copy everything before the delimiter, one character at a time.
      LET res = '';
      FOR i = 1 TO loc - 1
        LET res = res || string[1,1];
        LET string = string[2,255];
      END FOR;

      RETURN res;
    END PROCEDURE;
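
    For the string in the question:

     EXECUTE PROCEDURE FindStr("FOO-BAR", "-");   -- returns 4
     EXECUTE PROCEDURE Cut("FOO-BAR", "-");       -- returns "FOO"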
  • I have a CHAR column that contains numeric data. When I do a query WHERE char_col = "14", it comes back very fast, but when I query WHERE char_col = 14, it takes a really long time. I have an index on char_col. Why is it so slow?

  • When I upgraded from OnLine 4.x to 5.0 (or later), performance on queries WHERE char_col = 14 slowed to a crawl. Why?

    Contrary to what you might expect, a query "WHERE char_col = 14" does not first convert 14 to the character string "14" and then search for that. (I believe that version 4.x did that, and it was the fix to that bug - because it is a bug - that caused performance to slow down on upgrades from 4.x to anything else.) Instead, each value of char_col is converted from CHAR to INT, and then compared with 14. Why?

    Remember that an index on a character column is sorted based on the character values, starting from the first character of the field. Therefore, in a char(5) column, the following values are sorted in this order: " 14", " 21", "0014", "099", "14", "14.0". Four of these values are equal to 14, and should be returned by a query WHERE column = 14. If, however, the value 14 was first converted to CHAR, resulting in the value "14", only the fifth value above would be returned. Since the values equating to 14 can be spread throughout the index, the index cannot be used to find them. This query will use a sequential scan, or in the best case, an index scan, converting every value of char_col to INT before testing. For this reason, this use of implicit type conversion should be avoided. If the data is in character format, it should be compared with a character string, e.g. "14".
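
    A quick way to see the difference for yourself (table and column names here are hypothetical) is to turn on the optimizer's plan output with SET EXPLAIN ON, which writes the chosen plan to sqexplain.out:

     SET EXPLAIN ON;

     -- Implicit conversion: each char_col value is converted to INT before
     -- the comparison, so expect a sequential (or full index) scan.
     SELECT * FROM mytab WHERE char_col = 14;

     -- Character comparison: an index on char_col can be used directly, but
     -- note that this matches only the exact string "14", not " 14" or "0014".
     SELECT * FROM mytab WHERE char_col = "14";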

  • Why do I get an error when I try to SELECT NULL from a table?

  • NULL does not have a type, and therefore cannot be treated as a constant in a SELECT statement. The way to work around this is to create a stored procedure which returns a null of the datatype you want.
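
    For example, a minimal sketch (the procedure name is arbitrary) that returns a typed NULL INTEGER:

     CREATE PROCEDURE null_int ()
       RETURNING INTEGER;
       DEFINE n INTEGER;
       LET n = NULL;    -- a NULL of the procedure's return type
       RETURN n;
     END PROCEDURE;

     -- The procedure call then stands in for the NULL constant:
     SELECT customer_num, null_int() FROM customer;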

  • I inserted the value 1.69999971 into a FLOAT column, but when I select the data, I get 1.7. Is this a bug?

  • Both FLOAT and SMALLFLOAT are very limited in the precision they support, which depends on how the specific computer internally stores floating point numbers. If the value contains more digits than the floating-point representation on the computer can support, the least-significant digits are treated as zeros. The erroneous value displayed is usually not actually an error in the display (although there have been bugs entered against ISQL and DB-Access for not displaying enough decimal places in the Query-Language option), but in the way it is stored. This is documented most thoroughly in the manual Informix Guide to SQL: Tutorial, version 7.1, p. 9-10, and also in the Informix Guide to SQL: Reference, version 7.1, p. 3-15.

  • Why do I get an error when I add a month to certain dates?

  • The following SQL query demonstrates a problem in date/datetime arithmetic:

    SELECT order_date + 1 UNITS MONTH FROM orders
    This returns error -1267 "The result of a datetime computation is out of range" because one of the values of order_date is May 31, 1994. Adding a month to this would result in June 31, 1994, which is an invalid date. This is in accordance with ANSI standards.

    Couldn't they have just returned June 30, 1994, you may ask. Yes, they could have, but you probably didn't really want to do that anyway. What happens when you add a month to June 30, 1994? You get July 30, 1994, which is not the same as if you added two months to May 31, 1994.

    What you probably want is a procedure based on business rules; for example, a procedure that finds the last day of a month.
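
    A minimal sketch of one such rule follows (the procedure name and the clamping behavior are illustrative choices, not anything built in): add one month, keep the original day of the month when it exists in the target month, and otherwise fall back to the target month's last day.

     CREATE PROCEDURE add_month_clamped (d DATE)
       RETURNING DATE;
       DEFINE m INTEGER;
       DEFINE y INTEGER;
       DEFINE last_day DATE;
       DEFINE result DATE;

       -- Target month and year, rolling December forward into the next year.
       LET m = MONTH(d) + 1;
       LET y = YEAR(d);
       IF m = 13 THEN
         LET m = 1;
         LET y = y + 1;
       END IF;

       -- Last day of the target month: first day of the month after it, minus one day.
       IF m = 12 THEN
         LET last_day = MDY(1, 1, y + 1) - 1;
       ELSE
         LET last_day = MDY(m + 1, 1, y) - 1;
       END IF;

       -- Keep the original day of the month when it exists, otherwise clamp.
       IF DAY(d) <= DAY(last_day) THEN
         LET result = MDY(m, DAY(d), y);
       ELSE
         LET result = last_day;
       END IF;

       RETURN result;
     END PROCEDURE;

    With this rule, May 31, 1994 plus one month yields June 30, 1994 instead of error -1267.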


OnLine

  • How can I re-organize my table to consolidate my extents?

  • How can I move my table from one dbspace to another?

  • Both can be done with the same statement, which rebuilds the table (and its indexes) in the named dbspace and consolidates its extents in the process:

    ALTER FRAGMENT ON TABLE table1 INIT IN targetdbs;
  • I have two users trying to update rows in the same table. Each user is going after a distinct set of rows: for example, user1 updates where customer_num = 10, user2 updates where customer_num = 20; there is no possibility that the users are attempting to update the same row at the same time. But they are getting locking errors. Is this Adjacent Key Locking? I have tried setting Isolation Level to Dirty Read, but it still doesn't work.

  • First of all, it is important to realize that Dirty Read affects only queries, not updates and deletes. It is not possible to do a "Dirty Update"; that is, you cannot choose to ignore a locked row when doing an update. To do that, you would have to declare a simple cursor using Dirty Read, and then update individual rows based on the primary key, ignoring any lock errors, like so:

     WHENEVER ERROR CONTINUE  -- because the UPDATE will fail on a lock
     SET ISOLATION DIRTY READ
     DECLARE c1 CURSOR FOR SELECT unique-key FROM table WHERE { whatever your criteria are }
     FOREACH c1 INTO x
     UPDATE table SET ... WHERE unique-key = x
     END FOREACH
    The only isolation level that has an impact on updates and deletes is Repeatable Read (more on this later); otherwise, updates and deletes effectively run under Committed Read.

    Next, think about how you are accessing the rows. Keep in mind that if the row is locked, the engine cannot read it, even just to determine whether it meets your criteria or not. Therefore, if any row which your session needs to read is locked, your query will fail with a lock error. The way to avoid a problem is to use an index to go directly to the row(s) you need. If your query uses a sequential scan, it will eventually fail if even one row in the table is locked, regardless of what that one row is. Also, be aware that even if your query uses an index, if your index cannot isolate the row you want, you will have to scan the rows returned by the index, which will fail if any of these rows are locked. It is very important to make sure that your queries use the best access path, which usually means an index which takes you directly to your row, and not requiring reading any other rows.

    Ex.1. User1 updates rows where customer_num = 10. User2 attempts to update rows where customer_num = 20. Table has no indexes, therefore user2 does a sequential scan, and fails when he reaches customer_num 10, which is locked by user1. Although user1's row contains customer_num 10, not 20, the engine cannot verify that the row does not meet the criteria, because it is locked. If there were an index on customer_num, user2 could use the index to go directly to customer_num 20, never having to read the row locked by user1.

    Ex.2. User1 updates rows for city="Menlo Park", state="CA". User2 attempts to update rows for city="San Francisco", state="CA". State is indexed. User2 reads the index on state, which shows three rows with state="CA". User2 must then read the data rows pointed to by this index, and gets an error when he reaches the Menlo Park row locked by User1. If the index were on (state, city), user2 could go directly to the "San Francisco" row, and never try to read the row locked by user1.
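
    For Ex. 2, the fix is the composite index the example describes (the index and table names here just follow the example):

     -- With (state, city) as the key, each session reads only its own rows.
     CREATE INDEX ix_state_city ON customer (state, city);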

    So, how does Repeatable Read affect updates? The definition of Repeatable Read is that if the user were to re-run the same query/update/delete within that transaction, the same results would be returned. That is, data cannot be changed in any of the rows that were selected, nor can any more rows be added that would fit the criteria of the query/update/delete. For example, if I update all rows where customer_num = 10, then until I commit the transaction, not only can you not update any of those rows, but you cannot insert any new rows with customer_num = 10. This means that the adjacent index item must be locked to prevent you inserting this new row; this is very similar to Adjacent Key Locking (see the below-mentioned TechNotes article for more info).

    For a full description of Adjacent Key Locking and Key Value Locking, see TechNotes 1994, Volume 4, Issue 3&4: B+ tree Item Locking in Informix-OnLine 5.x and Informix-OnLine Dynamic Server.

  • If I declare a cursor using an isolation higher than dirty read, can I get it to skip locked rows? It seems like when I hit a lock and get an error, the next FETCH will return the same error.

  • When you get a lock error on a FETCH, your next FETCH will re-attempt to fetch that same row, hoping that the row has been released. There is no way to change this behavior.

    What you can do is use two cursors, with different isolation levels, to skip locked rows. Here's a 4GL code fragment I wrote which will do that:

     DEFINE c_num INT
     DEFINE c_rec RECORD LIKE customer.*
    
     WHENEVER ERROR CONTINUE
     SET ISOLATION TO DIRTY READ
    
     DECLARE c1 CURSOR FOR SELECT customer_num FROM customer
     FOREACH c1 INTO c_num
    
       SET ISOLATION TO COMMITTED READ
    
       DECLARE c2 CURSOR FOR SELECT * FROM customer WHERE customer_num = c_num
       OPEN c2
       FETCH c2 INTO c_rec.*
    
       IF sqlca.sqlcode = 0 THEN
         DISPLAY c_rec.customer_num
       ELSE
         DISPLAY "error ", sqlca.sqlcode, ": skipping to next record"
       END IF
    
       SET ISOLATION TO DIRTY READ
     END FOREACH
  • When I try to initialize my OnLine engine, I get:
  • 08:50:57 mt_shm_init: can't create virtual segment
    08:50:57 shmat: [EINVAL][22]: shared memory base address illegal
    08:50:57 using 0x80600000, needs 0x080800000
    If I change the SHMBASE to the needed value, I get the same message, only with different values. What's wrong?

    This error message is usually returned when a kernel parameter (usually SHMMAX) is not set high enough either to allocate the needed shared memory in a single shared memory segment, or to allocate the memory at all (the single-segment limit is the more common problem). Try increasing SHMMAX and rebuilding your kernel.

  • What is the maximum size of a chunk? What is the maximum offset+size for a chunk?

  • The theoretical maximum size of a chunk is 1,048,576 pages, regardless of whether you are on a 64-bit OS. This is because of the page numbering convention used by OnLine: the page address is represented in hex as CCCPPPPP -- 3 hex digits for chunk number, 5 hex digits for logical page number in the chunk. So the maximum page number in a chunk is 0xFFFFF, or 1048575 (since the first page number is 0, this equates to a maximum of 1048576 pages). This means that if your port uses a 2KB pagesize, the maximum size of a chunk is 2GB. If your port uses a 4KB pagesize, the maximum size of a chunk is 4GB. On some OS's, lseek may fail if a value over 2GB is passed to it; this will limit your chunk size to 2GB. For this reason also, the offset+size of the chunk may be limited to 2GB, meaning that you cannot create a 2GB chunk with a .5GB offset, or any other combination that would add up to more than 2GB.

    The maximum supported chunk size is 2 GB. This is documented in the Informix-OnLine Dynamic Server Administrator's Guide. For Version 7.1, it is on page 14-5.

  • I deleted a lot of rows from my table. Why is it still taking up so much space?

  • When your table grows, new extents are allocated to that table from the free space in the dbspace. When rows are deleted, these pages are not released. Even if all the pages in an extent are empty, the extent remains allocated to the table. The only way to reclaim the extent is to re-build the table, possibly by using ALTER TABLE or by exporting it.
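
    One way to rebuild the table in place is the ALTER FRAGMENT ... INIT statement shown earlier in this section (the table and dbspace names below are placeholders); it recreates the table and its indexes with freshly allocated extents, so the empty space is returned to the dbspace:

     ALTER FRAGMENT ON TABLE big_table INIT IN dbspace1;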

  • How can I use/save an HTML page greater than 32k using the Web DataBlade? I am getting the error "Results exceeded maxsize (32768)".

  • Increase (or set) MI_WEBMAXHTMLSIZE in your web.cnf file.

    The Webdriver allocates 8k of memory for storing the results of an app-page request. It then dynamically allocates more memory as needed, up to the value of MI_WEBMAXHTMLSIZE.

    MI_WEBMAXHTMLSIZE defaults to 32K and can be modified by adding it to the web.cnf file. The syntax is:

    MI_WEBMAXHTMLSIZE n
    where n is the maximum number of BYTES that you want to allocate to a Webdriver thread.
  • How can I limit the number of rows returned to the browser by the Web DataBlade?

  • In some versions of the Web DataBlade, there is a MAXROWS attribute for the MISQL tag to limit the number of rows returned. This is documented in the Informix Web DataBlade User's Guide. In later versions, it has been superseded by WINSIZE. Check the release notes for your version of the Web DataBlade.


Archiving

  • Does ontape support archiving (or logical log backup) to disk?

  • Not only is this question frequently asked, it is frequently answered incorrectly.

    YES - ontape supports disk files. If you need documentation, see page 12-6 in the INFORMIX-OnLine Dynamic Server, Archive and Backup Guide, Version 7.1:

    A user with one tape device might want to redirect a logical-log backup to disk file while the tape device is in use for an archive.
    This has been issued as Tech Alert 6125. Tell this to any Informix person who tells you this is not supported.
  • What is the maximum tape size that can be used with ontape?

  • The maximum TAPESIZE value is 2147483647 (2^31 - 1). Since TAPESIZE is specified in kilobytes, this makes the maximum tape size 2 TB (terabytes).

    Some platforms limit the amount that can be written (using write()) on a single open() to 2GB. Since ontape uses open() and write(), this will limit the tape size to 2GB; this is an OS limitation, not an Informix one.

    If you are using a Unix file for your tape device, however, you may be limited to 2GB. This is bug 76256, which is fixed in 7.24 and later versions.

  • Can I take an archive from OnLine version x and restore it on OnLine version y?

  • No. It is only supported to restore an archive taken with the same version of OnLine.

  • Can I use onunload to unload a table from OnLine version x and onload it on OnLine version y?

  • No. Onload only works on onunload files taken from the same version of OnLine. See the Informix-OnLine Dynamic Server Administrator's Guide, in the chapter OnLine Utilities, for onload/onunload:

    Constraints That Affect onload and onunload
    The original database and the target database must be from the same release of OnLine. You cannot use onunload/onload to move data from one release level to another.
  • Can I use OnBar's point-in-time recovery to recover a dbspace up to a specific time?

  • No. Although OnBar can be used to restore a specific dbspace, and also provides point-in-time recovery, they cannot be used together to bring a dbspace up to a point-in-time earlier than the rest of the system. A warm restore of a dbspace must include a logical recovery which brings the dbspace in sync with the rest of the instance.


ESQL/C

  • Can I compile ESQL/C code for use with C++ ?

  • There is no supported way to use ESQL/C with C++. However, it is possible to combine ESQL/C code with C++ if:

    • You break out the ESQL/C code from the C++ code; they must be in separate modules.
    • You do not use structs containing function members in the code that ESQL/C is aware of: no member functions or classes.
    • You do not use C++ comments in the code processed by ESQL/C.
    See the IIUG Software Repository, ESQL/C (esql_c++) for detailed directions.


Comments, questions, corrections, suggestions?



 


This newsletter is published by the Washington Area Informix User Group
Lester Knutsen, President/Editor
Washington Area Informix User Group
4216 Evergreen Lane, Suite 126, Annandale, VA 22003
Phone: 703-256-0267

lester@advancedatatools.com