Washington Area Informix Users Group


About Us

Upcoming Events

Current Newsletter

Newsletter Archive


Become a Member



October 1997 Newsletter

Volume 7, No. 4

Highlights of This Issue

Washington Informix Users Group - Next Meeting - December 3, 1997

Washington Informix / PeopleSoft SIG - Next Meeting - November 18

Michigan Informix Users Group - Next Meeting

"Ask the Experts" - Transcript from the 1997 Informix Worldwide User Conference IIUG Technical Track

Book Discounts Available to Informix User Group Members

ARCUNLOAD - a utility to extract a table from a level-0 archive (tbtape/ontape)

Membership and Sponsorship Information

The User Group Technical Conference
Solutions for Informix Users - Forum 98

The Washington Area Informix User Group would like to invite you to attend our fourth one-day users' technical conference and forum. This will be an exciting event that includes technical presentations, practical training sessions, exhibits, demos, a public-domain software diskette, and a chance to meet and network with other Informix developers, programmers, DBAs, and users.

Location: Fairview Park Marriott, 3111 Fairview Park Drive, Falls Church VA
Date: Friday, February 20, 1998, 8:00 am to 5:00 pm

At the last forum we had over 230 participants, 16 speakers and 14 exhibitors. Participants said they learned more practical information in the forum than in any other event. Invitations for this forum are being sent to over 3,000 Informix users. We are planning the following sessions and exhibits.

Keynote: Dr. Michael R. Stonebraker

Dr. Michael R. Stonebraker is Chief Technology Officer of Informix Software and was co-founder and chief technology officer at Illustra. A noted expert in database management systems, operating systems, and expert systems, Dr. Stonebraker is Professor Emeritus of Computer Science at the University of California at Berkeley. Illustra represented the commercialization of Dr. Stonebraker's POSTGRES research project on the UC Berkeley campus. Dr. Stonebraker founded Ingres Corporation in 1980. Dr. Stonebraker recently authored the book entitled "Object-Relational DBMSs: The Next Great Wave."

Planned Session Topics:

  • Data Warehousing
  • Database Performance Tuning
  • Database Security and Administration
  • INFORMIX DSA 7.3 Features
  • Migrating to OnLine 7.x from SE or 5.x
  • New Options for 4GL
  • Optimizing SQL Programming
  • Web Database Development
Planned Exhibits:
  • 4GL Upgrade Options
  • Client-Server Tools
  • Development and Consulting Partners
  • Database Administration Tools
  • Graphical Development Products
  • New INFORMIX Products
  • SQL and Database Training
  • Web Development Tools

Forum 98 Registration

Participation is open to everyone. There is a $30 registration fee. A final schedule and reminder will be faxed or mailed to all registrants. To register, please contact John Petruzzi, Membership Director, at 703-490-4598, or send in the form on the last page.


One room will be set up as an exhibit hall with space for 12-14 exhibitors. Four exhibitors have already signed up. If you are interested in exhibiting your products, please contact Lester Knutsen at 703-256-0267.


The topics of interest to our users are practical "how-to" sessions. We are looking for a few very good technical sessions on developing, administering, and using Informix databases and client-server tools. If you are interested, please send us a proposal at waiug@iiug.org or contact Lester Knutsen at 703-256-0267.

Newsletter Sponsorship
We would like to thank the following companies for sponsoring this issue:
Advanced DataTools Corporation
ChainLink Networking Solutions
Summit Data Group

This newsletter is published by the Washington Area Informix User Group.
                    President/Editor: Lester Knutsen 703-256-0267
                    Membership: John Petruzzi 703-490-4598
                    Treasurer/Secretary: Sam Hazelett 703-277-6882
                    Programs/Sponsors: Nick Nobbe 202-707-0548
For more Information: 703-256-0267
Web Page: http://www.iiug.org/~waiug/

Next Meeting Agenda - December 3, 1997
Date and Time: December 3, 1997, 9:00 a.m. to 12:00 noon
Location: Informix Software Corporation, 8065 Leesburg Pike, Suite 600, Vienna, VA 22182
Using Informix Enterprise Replication
by Bob Carts of Science Applications International Corporation

The Informix Enterprise Replication (IER) product became available with ODS 7.22 in January 1997. This presentation describes what IER is and what it can do. A technical description of how IER functions is provided, including many tips and gotchas not covered in the documentation. Finally, features of the next release of IER will be highlighted. The information is especially valuable if you are considering using IER or are interested in data replication.

Year 2000 Impact for Informix Users

This will be a panel discussion on the impact of the year 2000 on Informix database users. The Informix data type DATE stores all four digits of the year. However, some developers are concerned about their applications. We will review the different versions of Informix and availability of the new environment variable DBCENTURY. User group members will present how they are dealing with the year 2000.

Informix / PeopleSoft SIG - November 18

The first Washington Area Informix / PeopleSoft Special Interest Group (SIG) was held on June 18th in Bethesda, MD. This successful seminar featured two performance-related presentations: the first by Kevin Fennimore of UCI Consulting ("Indexing Strategies and Query Optimization") and the second by Raj Devireddy, a PeopleSoft consultant ("Performance Tuning for Developers"). After the two presentations, the morning concluded with a roundtable discussion. Feedback from the participants was very positive.

The following 11 organizations were represented at the Washington Area SIG in June: Aerotek, Choice Hotels, Circuit City Stores, Geico, Giant Food, Host Marriott Corp., Host Marriott Services Corp., IRS, Marriott International, Piper Marbury LLP, and US Dept. of Veteran Affairs. Four organizations were not able to attend the first SIG, but requested they be included in future meetings.

The next SIG meeting is scheduled for November 18th. The meeting has a full agenda, starting with a "getting to know you" session in which each organization will describe its environment and share its configuration. These presentations will help SIG members become familiar with the challenges each organization faces, provide the baseline information from which meaningful future programs can be formulated, and promote group affiliation. The meeting will also cover architectural issues surrounding the transition from PeopleSoft's two-tier to three-tier architecture. Finally, reports on the Informix User Conference (7/97) and the PeopleSoft User Conference (9/97) will be given. In addition to the Washington Area SIG, there is a National Informix / PeopleSoft SIG, which met at the PeopleSoft Annual Users Conference on September 7.

Should you have an interest in more information on the Informix / PeopleSoft SIG, please feel free to contact Nadia Skiscim at (703) 847-3323 or nadias@informix.com.

Michigan Informix Users Group - NEXT MEETING

The next meeting will be on Tuesday, November 18, at 6:15 PM. It will probably be held in Southfield. The likely topic will be the SilverStream development tool, which allows creation of Web-enabled applications that use backend databases.

The MIUG board has suggested that our members begin presenting "how I used Informix" stories. These would be short presentations that would help the members better understand different aspects of how to use Informix. Look for a presentation at the November meeting. These presentations can be simple and short, and require no formal preparation (not even slides). Please contact us if you want to present at the next meeting.


The Michigan Informix User Group (MIUG) is a non-profit organization formed to meet the needs of the Michigan Informix community. We meet bi-monthly (every odd-numbered month) in the metro Detroit area. The meetings are held on the third Tuesday of the month and last two to three hours. See our policies and procedures for complete details.

We are pleased to present the MIUG E-Mail list! The list can be used to exchange information on MIUG, ask technical questions, and get to know other members. To join the MIUG list simply send an E-Mail to majordomo@iiug.org and include "subscribe miug-members" in the text of your message. For information on using majordomo, send an E-Mail to majordomo@iiug.org and include "help" in the text of your message. Note: It is helpful to include a single line with the word "end" on it after your subscribe or info command.
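The subscription message described above can be composed from a Unix shell. This is a minimal sketch; the list address and commands are taken from the paragraph above, and the standard `mail` command (assumed to be available on your system) would do the actual sending.

```shell
# Build the message body majordomo expects: the subscribe command,
# followed by "end" so any trailing signature text is ignored.
BODY='subscribe miug-members
end'

# Show what would be sent.
printf '%s\n' "$BODY"

# To actually send it, pipe the body to mail, e.g.:
#   printf '%s\n' "$BODY" | mail majordomo@iiug.org
```

The same pattern works for the "help" request: replace the body with the single word `help`.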

The MIUG meetings generally follow the same format. They start with gathering and networking from 5:45-6:15 PM. The main presentation starts at about 6:20 PM and covers Informix and third-party products and case studies. Following the main presentation, we discuss MIUG issues, give an update on Informix products, and hold a round table and Q&A session. When possible, local MIUG members give brief technical presentations. There is generally an Informix representative on hand to address your questions and issues. The meetings adjourn by 8:30.


All information is available on our Web site at http://www.zenacomp.com/miug/. E-Mail: miug@zenacomp.com
Phone: 248-887-8470
Fax: 248-887-5395
Michigan Informix User Group
43422 West Oaks Drive, Ste. 294
Novi, MI 4837

Transcript from the 1997 Informix Worldwide User Conference
IIUG Technical Track -- "Ask the Experts"
Carlton Doe, IIUG President, Moderator and Transcriber

Well, it's taken a little time, but the transcript of the "Ask the Experts" session at this year's Informix Worldwide Users Conference is finally available. This session was part of the technical track sponsored by the IIUG. Participants in this session included:


PH: Dr. Paula Hawthorne, VP of the Tools Group
DT: Don Top, Director of Database Kernel Engineering
KW: Kevin Whitley, Tools Architect
MU: Mike Ubell, Executive Architect, I-US
JF: Jeff Fried, Senior Engineer
JL: Jonathan Leffler, Senior Engineer
BB: Brett Bachman, Product Manager for engines
TH: Troy Hewitt, ATG
CA: Clem Akins, International Support
CD: Carlton Doe (Moderator)

Questions for this panel were submitted by IIUG members and attendees to the 1997 IWUC. As a result, there is a broad range of discussion about Informix's tools and engines, their features, how to implement them more efficiently, as well as future plans. It would be well worth your time to read it.

The transcript is available on the IIUG server at http://www.iiug.org/techinfo/ate_tran.html

Carlton Doe

CD: Welcome to this, the fifth session of the IIUG Technical Track. We call it "Ask the Experts". We have sitting before you the people who either write the code or direct that the code be written in all of the major product groups that Informix has, or can address the marketing and product direction strategies for all the products.

As you can see, we have literally the gamut. We have people ranging from Troy Hewitt and Clem Akins, who represent real-world experience, to Dr. Paula Hawthorne, Vice President of Research and Development for Tools. I have asked them to take just a moment and briefly introduce themselves and let you know what they would want you to know about them. So, we'll start way down on the end there with Troy, if you wouldn't mind.

TH: I'm Troy Hewitt. I've been with Informix now for about five years, and about two thirds of that was doing consulting work. Now, with the ATG Group, I work on contracts with clients such as FedEx and BellSouth; some of the larger terabyte-size systems. If you have any questions, feel free to ask.

CA: I'm Clem Akins. I work for International Technical Support for Informix, and I'm here representing the support arm of the organization. I've been working with Informix products for the last twelve years--in every role from developer to DBA, now to vendor and supporter, so I'll be answering questions along that line. Thank you.

MU: I'm Mike Ubell, Executive Architect, concentrating mostly on I-US. Prior to that I was Chief Scientist at Illustra. I have been working with databases, it seems, for most of my life.

DT: Hi, my name's Don Top. I've been with Informix for a little over eleven years. Six of those years doing support, primarily in Menlo Park. Five years ago I moved up to Portland and have been involved with development of the RCM layer; that's the access methods and storage management of all three of our servers.

JF: My name is Jeff Fried. I've been with Informix for seven years. For the first five I worked on query optimization, and for the last two I've been a member of the team working on I-US.

JL: I'm Jonathan Leffler. I've been working with Informix products for over eleven years now. I've been working for Informix for six and a half, something like that. Primarily I've been working with 4GL and things like that, but I have also worked with the engines and so on, and I'm also doing things at the moment with (unintelligible) Informix, which is the Perl way of getting at Informix databases.

KW: Hello, my name is Kevin Whitley. I'm an architect in the Tools Division at Informix. I'm a relative newcomer to Informix. I've only been here about a year and a half, but I've been working in tools in various places, including the unfortunate competitor down in Redwood Shores, for a bit over ten years. I specifically cover things like NewEra, DataDirector and the DataBlade Developers Kit.

PH: I'm Paula Hawthorne. I'm the Vice President for the Tools Group. I've been with Informix since Illustra was combined with Informix about, almost two years ago now. I was one of the founders of Illustra.

BB: Brett Bachman, General Manager of Enterprise Products. What that's all about is product marketing, pricing, and product development for our web products, the web DataBlade, Universal Web Connect, the Web Integration Server, and some of the new messaging products including the Tibco (unintelligible).

CD: Thank you. When we decided to put this session together, we tried to publicize it to as many people as possible, asking them to submit technically oriented questions. Not necessarily version-specific questions, such as why version 7.23.UC1 doesn't do something that UC2 does, but general technically oriented or product direction questions that these individuals could address. There has been a form for submitting these questions on the IIUG web site. There was also a link off of the evaluation page of The Source system out here in the Internet Cafes where people could submit questions. What I have up here with me right now is a compilation of those questions, which we're going to go through.

The organization of the session is fairly basic. We're going to focus the questions on application tools to begin with. We'll finish up with database engine oriented questions.

The first one's for Jonathan Leffler. It says: which versions of the tools, and by this they mean ISQL, 4GL, or ESQL/C, make use of the DBCENTURY environment variable? We have a large base of legacy code that must be modified to utilize this variable, but our tools don't all seem to recognize it.

JL: ESQL/C and OnLine Dynamic Server version 7.20 were the first products that actually introduced that variable. It is going to be retrofitted into 6.10 4GL and 4.20 4GL, which are due to be released later this year. I believe it's also being retrofitted into the 5.10 servers, also due to be released later this year. It is automatically available in I-US 9.0 and above, and I understand it's going to be put into the 8.2 release of the XPS servers. I did actually cover this yesterday in the 4GL session.
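As a sketch of how the variable is used in practice: DBCENTURY is set in the environment before the tool or application starts, and it controls how two-digit years are expanded to four digits. The value semantics in the comments below follow later Informix documentation and are an assumption here; check the release notes for your specific version.

```shell
# DBCENTURY controls the expansion of two-digit years (assumed semantics,
# per later Informix documentation -- verify against your release notes):
#   C = century closest to the current date
#   F = the nearest future date
#   P = the nearest past date
#   R = the present (current) century
DBCENTURY=C
export DBCENTURY

# Any ESQL/C or 4GL program started from this shell now inherits the setting.
echo "DBCENTURY=$DBCENTURY"
```

Because the variable is read from the environment, legacy programs only need to be relinked or rerun under a version of the tools that recognizes it; the source code itself does not have to reference DBCENTURY.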

CD: For Kevin Whitley--is there a way in NewEra, to create a single object that would control multiple objects in different windows? If so, could you give us a brief explanation of how that can be accomplished?

KW: At present, no. {laughter} I talked with a couple of people right before the session and I was hammered with that question a couple of times.

As we evolve NewEra forward, this is one of the most important things we're trying to deal with--encapsulating the various pieces of NewEra into objects and to give you access to their internal components so that the objects will have a proper component hierarchy. So, although the answer is no right now, the answer will be yes, reasonably shortly.

CD: For Jonathan. This was an interesting question, we got quite a chuckle out of this one. Is it possible to write to an Oracle database using Informix 4GL? {laughter}

JL: Ah. Depends on how much effort you want to put into it, really. The short answer is no, there is no native way in 4GL to talk directly to an Oracle database. 4GL is written in C and Oracle has a C access layer, so there's nothing to actually stop you writing code that would allow you access to Oracle, and then treating it as ordinary functions in 4GL. So, yes, it could be done. No, it is not straightforward, and I'd also ask why?

CD: Continuing on with the 4GL line, a person asks, will 4GL ever inherit any functionality from NewEra or vice versa?

JL: Actually, it already has been done. 4GL's already inherited a little bit of functionality from NewEra. NewEra has a terminate report statement and that actually is part of 4GL these days. If you have a sufficiently recent version of the product and if you've read the release notes. That's actually something which I would encourage everybody always to do.

Going the other way, no, there's not an awful lot that NewEra is likely to inherit from 4GL simply because NewEra already set out from basically the current version of 4GL, and so that which is in NewEra is already inherited from 4GL and anything that was left behind was left behind for a purpose.

CD: Now that Informix 4GL is off the endangered species list, does this mean that we will have a port of 4GL for each engine platform? And if so, does this include NT?

PH: What we're doing with 4GL is determining on a case by case basis, what functionality to put into it and where we need to do work, versus where our partners need to do work, or whether our partners already have good technology.

We're working with a couple of different companies. I don't know if you have been wanting to mention who they are, Brett? It's in Brett's group. Well, they had ads, if you noticed in your programs their ads are there. Fourgen and 4Js are two companies that have 4GL compilers that will allow you to run 4GL on NT.

As you know, Informix is a partnering company. Where there are partners that are doing very good products we don't see a reason to go in and occupy that same space. We would prefer to be working in those areas where there aren't other people working. We certainly are evaluating these two different products and will have recommendations, etc., for our customers once we finish those evaluations.

CD: The next question is for Kevin -- I have quite a bit of 4GL code that currently resides in a NewEra application server, but I want to migrate to another PC based application tool. Is there any way for JAVA, Power Builder, Visual Basic, or tools of this nature to talk to a NewEra application server?

KW: Yes! The newest version of the NewEra application server will allow you to expose your NewEra classes through OLE Automation interfaces or through JAVA RMI. So, through your JAVA clients or your OLE Automation-aware clients--VB, Power Builder, Visual C++--you can get at your NewEra classes on the app server. That's in NewEra 3.1.

CD: For Jonathan -- Informix SQL and 4GL report writers can use different page dimensions. Can these products be enhanced or can applications be enhanced to dynamically set these page dimensions at run time? Furthermore, can the printer to which these reports be directed also be dynamically set at run time?

JL: Okay. The answer to that is yes, but.... In Ace, you really can't control the report dimensions at run time, full stop. You have to recompile the report, there isn't any way that I've managed to find of cheating that system.

With 4GL there is actually, on the IIUG web site, which is http://www.iiug.org, down in the software section, some code which can be used to configure 4GL reports dynamically at run time. There's a bunch of functions which are called, and you set page length, set left margin, set right margin, and then inside the body of the report you say... Let's see, I can't remember what you say now, it's a long time since I actually wrote code like that. But it has a function which you call which actually goes and cheats behind the scenes. However, it does work pretty darn reliably. So that is available to you if you really want to use it.

In terms of setting printers, you need to remember that report to printer is really just a shorthand for report to pipe and lp, or whatever you've set as DBPRINT. So basically, if you want to control different printers, you can set your environment variable, which is read every time the report is started; if you want to use something fancy like that, you can juggle with the environment. But it's usually simpler just to arrange to do report to pipe and then specify the particular printer that you want that way. It's probably cleaner than fiddling with the environment.
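The environment juggling Jonathan describes can be sketched in the shell. DBPRINT is the variable the answer names; the print command and the queue name below are hypothetical examples, not anything from the article.

```shell
# Route 4GL "REPORT TO PRINTER" output through a specific print queue by
# setting DBPRINT before the report is started (it is re-read each time a
# report starts). "lp -d accounting_laser" is a made-up example command.
DBPRINT='lp -d accounting_laser'
export DBPRINT

echo "Reports will be piped through: $DBPRINT"
```

The alternative Jonathan recommends, `REPORT TO PIPE`, embeds the print command in the 4GL source instead, which avoids manipulating the environment at all.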

CD: This question is for Brett. When we were reviewing these questions beforehand, he said that he could speak for an hour and a half on this one alone. I'll try and rein him back in just a little bit here. Would you please explain the Universal Tools Strategy?

BB: All right. So what's the Universal Tools Strategy all about? First, as Paula and the rest of the folks here have been saying, we are continuing to invest in enhancing and bug-fixing our traditional tools--4GL and NewEra. I think the teams have done an excellent job in providing new features to address many of the requests you have provided. Along with the market demand we've seen in those tool areas, over the last couple of years there's been tremendous growth in the proliferation of new tools that folks like yourselves may already be using--particularly Microsoft Visual Basic and the whole Visual development tool suite from Microsoft, as well as, recently, JAVA tools like Visual Cafe Pro from Symantec and others.

So what the Universal Tools Strategy is all about is providing additional components and middleware elements, marketed under the brand name Data Director, which plug into tools like Microsoft Visual Basic or Visual Cafe Pro from Symantec. They make it much easier, faster, and more flexible for application developers using Visual Basic to create client-server applications, either for the traditional OnLine family or for Universal Server, by simply dragging and dropping elements from a data model onto a visual design form--or, in Symantec Visual Cafe Pro, into one of their palettes--without having to write SQL code or Visual Basic script. And in the case of Visual Basic, and Kevin and these guys can tell you more about it, Data Director makes the applications run much faster than would be possible with pure Visual Basic, by using intelligent client-side caching and lock management.

So, Data Director is the newest element of our overall tools strategy, and we launched this umbrella, which included enhancements for 4GL and NewEra as well as Data Director, in early April. So that's what the Universal Tools Strategy is all about: opening up Informix's servers to any tool you want to use, not just 4GL and NewEra.

CD: This question is for Jonathan -- are ESQL/C libraries in version 7.3 thread safe, and if not, which versions are thread safe and which ones are not?

JL: ESQL/C 7.2 is thread safe if your platform has the thread-safe thread libraries that it uses. It uses the DCE threads package, which is different from the Posix threads package in that DCE threads used a draft--I think it was Draft 9--of what subsequently became the Posix standard threads. It doesn't really matter. The key thing is it's not quite the same; it's close but not identical. 7.2, therefore, requires that particular threads library, and you have to arrange to get that from your O/S vendor by whatever means, and hopefully people at Informix can tell you exactly what version you need. So 7.2 is thread safe. I'm assuming, in fact, that 9 is also thread safe. I don't know about XPS; that's not my specialty. Any earlier versions--7.1, 6.0, 5 point anything--are not thread safe.

{Question from the audience}

JL: The question is, which platforms are the DCE threads available for? I believe that Solaris definitely. I believe HPUX. I believe probably AIX, and I would not be surprised to find that Sequent and one or two other platforms like that. I don't know about NT.

CD: This question is for Brett -- does Informix have any plans to work with object oriented modeling and design tools? If so, in what way?

BB: Actually, we're in the process of rolling out some fairly exciting partnerships with a number of the leading object-oriented modeling tool companies. I met with some of the folks from Logic Works just last night and saw a demo of a new tool that they've got and are going to be promoting to current Universal Server customers. It looks really exciting because it supports, through their visual modeling tool set, virtually all the features of I-US, including inheritance. So that's pretty exciting. So we'll be partnering with them, with CSA, with Infomodelers, guys like Rational, and so forth.

There's a member of the product marketing group who has partnership responsibility for all of our analysis and design tool partners, and I believe she's put some information regarding these partnerships up on Informix' external web site. You can also talk to many of them because they have booths here at the show and I'll bet they're demoing these products.

So, I expect we'll have three great analysis and design tools supporting direct creation automatically of I-US schemas generally available before the end of this year. So at least three great choices and hopefully more soon.

CD: For Kevin -- how do I add my own classes, some of these are visual classes, in Data Director for JAVA?

KW: All right. In the current release of Data Director for JAVA, DDJ 1.1, you'll need to add the classes yourself directly in the source code. There's no direct, visual way of doing that.

When we get to the DDJ 2.0 product you'll have better visual support, integrated with Visual Cafe. You'll have the nice visual support for adding classes and interfaces and so forth that you get from Symantec, right. But right now you just need to go in and edit the source files yourself and add your classes.

CD: This question actually might need to go to Don Top because it is more engine oriented per se, but it is technically, a programming language -- do you expect to make any enhancements to the Stored Procedure Language and if so, what are they?

DT: Actually, I'm going to defer to Mike, here, so go ahead, Mike.

MU: I-US has SPL enhancements to support the I-US features that were added, so all the I-US features are supported by SPL in version 9.

Currently, I don't believe there are major enhancements planned for SPL. We do have some SQL extensions that would have pushed more into SPL--for example, a case statement and that sort of thing. If someone has specific requests, we can see where they are.

JL: Will we be able to do character subscripting on character variables, or will we need to use "substr" or some other function to do that? That's probably the biggest thing people are hurting for. They cannot do a substring on a variable except with fixed, numeric literal subscripts.

MU: It's not currently in the works.

JF: However, while you cannot do that, the fact that we have user-defined routines means you could construct a user-defined routine to add any functionality that you want, so you don't have to wait for us to construct it. In general, with SPL, most of the features that constitute changes to SPL are really functional changes to SQL itself, and SPL automatically inherits those.

PH: In the meeting that we had earlier with the Informix User Group Leadership Council, it was clear that there's a little confusion around, well, can I do this in ODS? And we answer, yes, you can do it in I-US. Someone says, oh, well, but that's I-US, that's not ODS. But what everyone needs to keep firmly in their heads is that I-US is simply ODS extended. That's what we did.

In spite of the fact that Larry Ellison said we couldn't do it, what we did is we took the ODS code and added into it, the Illustra functionality to give it the extensibility. So I-US is really, honestly a super set of ODS, and in fact the users group was asking why we can't just call it ODS++ and I said, well, I'd ask Brett. {laughter}

So when you hear it is in I-US, that's a yes, for yes it's in ODS. The only reason it hasn't just taken the place of ODS is that it has all this extra code which needs to go through the shake out that you all want it to go through before we just substitute it for ODS, but that's the facts.

Now, the other thing that Brett sort of whispered to me when we were talking about SPL was: don't forget that with I-US you have JAVA server-side functions, and we expect people to move from SPL to writing JAVA server-side functions. Isn't that correct?

BB: Yes.

CD: That actually begs the question that's going to come up in the engine side and that has to do with JAVA in the engine, but we'll wait on that for just a moment.

This one would probably be directed toward Jonathan -- what needs to be done besides recompilation in order to migrate ESQL/C applications from an OnLine 5.x to OnLine DSA?

KW: Well, first of all, we have a tool--I've forgotten what it's called--the relay module, that will allow existing 5.0 ESQL/C applications to connect to 7.0. So you don't actually have to recompile if you don't want to. If you do recompile, those applications will run just as they did, except that instead of connecting to a 5.0 engine they will connect to 7.0 or 7.x directly.

You can recompile or you don't have to. You can use the relay module. It's not necessary.

JL: The advantage of recompiling is that you get the direct connection, which will give you better performance. If you use the relay module, you're interposing an extra process in the communication chain, and the fact that you have an extra full process in there means it's going to take longer to do things--basically, to get the information back and forth to your program. If you're sending a "delete" statement that's going to delete a million rows, you won't notice the difference; it will take the time it takes to delete a million rows either way. If, on the other hand, you're selecting a million rows, you will notice the difference, because you've got a million lots of data to shunt from process one to process two and then back to the application, and that process takes time.

CD: This question would be for Paula: what are customers' options for migration and/or long-term support with respect to installed 4GL, NewEra, and Data Director products?

PH: What we're doing is very carefully moving the functionality that we have in NewEra into the Data Director projects, by allowing you, in future releases of NewEra, to encapsulate the NewEra business logic in your applications as ActiveX components or as JAVA Beans and then drop those into the newer environments. This is a migration strategy that's going to take place over time. It's always important, when talking about migrating a customer base as big as the ones that we have on 4GL and NewEra, to keep emphasizing that you don't have to migrate; we will continue to support those two products. When you want to migrate, we will have a means for you to do that. Kevin's the architect for the work that we're doing there, so let me make sure I've told the truth here.

KW: Yes. {laughter}

CD: Like he's going to say, no.

For Jonathan, in a 4GL application if the error condition is set to "whenever error continue", is it possible to trap the line number where an error occurred if that error was recognized programmatically rather than by the "whenever error" statement?

JL: Short answer to that question is no. I have in front of me a crib sheet {with the questions} and this question has a follow-up -- if not, when can we expect it? Well, 23rd, 24th century.

It's not something I think that we're likely to introduce. It hasn't been a big issue so far. If it's enough of an issue we will think about addressing it, but we need to be made aware that it is a sufficiently big issue.

Generally speaking, if you're dealing with code with "whenever error continue", you're also going to handle the error. But I recognize that sometimes you get six errors which you understand and all the other ones which you don't. Then you want some sort of the old 'get out' clause. There could be some value in it, but we need to be convinced that we need to do it.

CD: This question is for Paula, when, or are you planning on upgrading the 4GL and I-SQL tools to be compatible with the 7.x server products?

PH: We actually went through a fairly extensive study with that. Jonathan and a group of people looked at what it will take. Since there is (and I don't have the right technical term) a bail-out, a way that you can get from 4GL to the functionality of the 7.x servers by using embedded SQL calls, we felt that there was enough flexibility there that we wouldn't have to go through a totally new release of 4GL to do it. Again, this is business case related. If we find that there is enough of a business case from the customers to go back and add in the direct functionality, that is, rather than making an SQL call, having a language construct that exactly replicates the functionality of SQL, then of course we'll look at it.

CD: This one comes from the floor and would be directed toward Jonathan. The person would like to understand a little bit better the difference between "exit report" and "terminate report" and in which versions of 4GL does this syntax exist?

JL: Right. There are two statements which were introduced, I think it was 4.13, but it might have been 4.14, or the corresponding 6.01 versions. One of those statements is "exit report", the other one is "terminate report". "Terminate report" is like "finish report": it's something you do outside the body of the report code. "Exit report" is like "exit foreach": it's something you do inside the body of the report code. Both of them stop the report dead in its tracks. That's probably the most useful distinction.

"Terminate report" allows you to have a two-pass report and collect 60 rows of data. If you then find that there's something seriously astray, you don't want to actually output any of the data so you do a "terminate report". The two-pass report then just throws away the temporary table that it's been accumulating and doesn't actually print anything on your output, which is nice.

"Exit report" is if the report function itself detects a problem and wants to terminate the report unilaterally. After you have done an "exit report" the only valid operation you can do is "start report" again.
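The distinction above can be sketched in Informix 4GL. This is a hedged sketch, not production code: the table "orders" and the validation function "data_is_astray" are hypothetical names invented for illustration.

```4gl
-- Sketch only: "orders" and "data_is_astray" are hypothetical.
-- TERMINATE REPORT is legal outside the report body; EXIT REPORT is
-- legal only inside it. Both stop the report dead in its tracks.

MAIN
    DEFINE ord RECORD LIKE orders.*
    DEFINE terminated SMALLINT

    LET terminated = FALSE
    START REPORT order_rpt
    DECLARE c CURSOR FOR SELECT * FROM orders
    FOREACH c INTO ord.*
        OUTPUT TO REPORT order_rpt(ord.*)
        IF data_is_astray(ord.*) THEN
            -- Caller aborts the report: the two-pass temporary table is
            -- thrown away and nothing is printed on the output.
            TERMINATE REPORT order_rpt
            LET terminated = TRUE
            EXIT FOREACH
        END IF
    END FOREACH
    IF NOT terminated THEN
        FINISH REPORT order_rpt
    END IF
END MAIN

REPORT order_rpt(ord)
    DEFINE ord RECORD LIKE orders.*
    FORMAT
        ON EVERY ROW
            IF ord.order_price < 0 THEN
                -- The report code itself detects a problem and stops the
                -- report unilaterally; after this, only START REPORT is
                -- a valid operation on the report.
                EXIT REPORT
            END IF
            PRINT ord.order_num, ord.order_price
END REPORT
```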

CD: This one could be for Kevin or for Paula. It says, are there any plans to incorporate Informix Universal Web Connect functionality, and the example here is function calls, into 4GL? Now understand that because these questions are coming out of the blue, it is perfectly acceptable for these people to say, we don't know but we'll find out.

PH: Web Connect is a product that belongs to Brett. He has product development for Web Connect, and so Brett was just talking with Kevin. We haven't actually ever had this question before, so we find it an interesting question.

BB: Maybe we could get a little clarification of what the intended application of such a feature would be.

CD: The question is for whoever submitted the question, could you give us a better sense for what you might be trying to do that would motivate the requirement for these features?

BB: Who was it that asked the question? Are they still here? Okay.

{comments from the floor}

BB: All right. So I think the general spirit of the question might be, for those who have made investments in 4GL business logic or potentially other languages, is there a possibility to offer web connect functionality for dynamic web publishing and the creation of mid-tier business logic for direct access to your web clients in those languages? My short answer on 4GL is this is the first time that specific question has come up. It's a good one and we'll take a look at it.

In general we have been looking at moving from C to offer C and JAVA as an alternative so that we can get leverage across the JAVA Data Director work, and I think that would be a natural evolution from C to include JAVA. 4GL I understand could be even more important.

CD: This kind of ties into one of the functions of the IIUG and that is to act as an advocacy body for you, the users of the products and services that Informix has. One of the things that was established this past year was a very formal process through which that is done.

Paula and others participated in the first of these advocacy meetings where we sat down and spoke with the people who make the decisions. We said these are the concerns that we, as users, have with the products. They have been very willing to listen to us and our input and they also provided quite a bit of information back that we will be sharing with you as time goes on.

Questions such as this, or a need or desire such as this, can be effectively funneled through us to these people rather than going to your sales rep who will probably round file it anyway. Because, unless it has to do with commission... I mean, let's be honest, unless it has to do with commission, it's not going to happen. Okay?

Next question, what are the advantages and disadvantages of using ODBC and what Informix direct-connect alternatives do you recommend and why?

DT: The latest release, ODBC 2.7, is a version of ODBC implemented directly in-house, so it has none of the layering overhead that we've had in the past; from an Informix point of view it ought to be equivalent to ESQL/C. If your programming environment is ODBC and you like that style, then there's no disadvantage. If you prefer a higher level of abstraction, then the disadvantage is that it's not ESQL/C.

On the I-US side, the ODBC driver currently does not support all I-US data types so that is a disadvantage in that environment, but we're working towards making that statement false.

CD: Will the new Visual SQL Editor supplied with Data Director be available as a standalone product or with other development tools?

PH: Packaging issues, you know--can you take this part out of this part and put it as a part of another part or actually make it standalone or whatever. Packaging issues are things I always give to marketing to handle.

My job is to say whether it's technically possible, and it's Brett's job to say whether they want it in any given package. I can tell you from a development perspective, I would rather not have a whole lot of standalone products. The reason for that is that we then have to go through the re-certification and re-QA of a combinatorial suite of standalone products. If we don't, for sure, one of you all will find one of our standalone products that doesn't work with another standalone product. So my own bias in this case would be to not make a standalone product out of it, but again, this is something that I always defer to marketing.

BB: So I'll add my two cents. At this point there are basically three ways you can buy tools-oriented products from Informix. First, there's 4GL. Second, there is NewEra, and third there's Data Director. Now when you buy Data Director you get a run time license for NewEra, so Data Director is a superset of all of our tools other than 4GL, and if you want we can include a 4GL license in the Data Director environment. Again, based on the strategy, what we try to do is make the Data Director umbrella package include everything that you might need to deploy an application regardless of the tool you're using: NewEra, Visual Basic, Symantec Visual Cafe Pro. I tend to agree with Paula; not only do we have the extra overhead of certification, but we also have the inventory and packaging and just the extra overhead of line item stuff.

I think it's unlikely we will create additional tools packages. For now, I think Data Director is a nice superset of all of our tools and I'm hoping that it'll meet all of your needs. If it doesn't, we ought to fix that first.

CD: The last question, and let me see if I can kind of read it on the fly here, would probably be directed to Jonathan. It has to do with the growth in size of executables when you compile 4GL using the C4GL statement. It says that they're using the "-static" option, but then they also talked about the "-shared" option. I guess if you could talk just briefly about perhaps decreasing the size of the compiled application with compiler flags.

JL: Some of us were in the right session yesterday. I talked a little bit about this there, and quite a lot afterwards.

C4GL is the compiler script which is used to build executables with most of the current versions of the software. It has a pair of mutually exclusive flags, "-static" and "-shared". "-static" links the static libraries into your executable, which leaves you with a large executable.

If you use shared libraries, the "-shared" option, then you get smaller executables. Generally speaking, you can improve the performance of running lots of those programs on a system, because once the first program has loaded the shared library, all the other programs don't have to reread it off disk. Also, in fact, the memory space that is used for the shared library code is shared between all the different programs; whereas if you have statically linked code, each separate executable has its own copy of the 4GL libraries. So, using shared libraries reduces the load on your machine and you should therefore use that whenever it's an option.
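As a sketch of the two linking styles described above (the program and output names are made up for illustration):

```sh
# Build the same 4GL program both ways.
# -static links the 4GL libraries into the binary: a large executable,
# with a private copy of the library code in every running process.
c4gl -static -o orders_static orders.4gl

# -shared links against shared libraries: a smaller executable, and the
# library pages are shared by every 4GL program running on the machine.
c4gl -shared -o orders_shared orders.4gl

# Compare the resulting file sizes.
ls -l orders_static orders_shared
```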

There are probably a few platforms where shared libraries are just painful to produce. One or two platforms spring to mind, some of the older SCO stuff and some of the AIX versions. It's quite hard work making anything remotely resembling a shared library work. It can be done but mostly, I think, you'll find it isn't done. On the other hand, if it's a more recent version of most of those products, I think it is done. There are shared libraries on all the working platforms I'm aware of, but it depends on the version of the product, version of the operating system.

Generally speaking, if you've got shared libraries as an option, use it. The other thing to do is to make sure that you're not trying to fit too much code into your application program. If you've got fifty megabytes of object code which is specific to your program, that fifty megabytes is going to be loaded into memory every time the person runs it, and it's going to use up lots of space. If you can cut it down into one-megabyte chunks or sub-megabyte chunks, it'll be better for you.

CD: Thank you. We've actually come to the half way point in this presentation. I'd like to invite everybody to stand up for no more than about five minutes because we have a ton of questions that are engine oriented.


CD: If we could have everybody sit down, please, we'll start again. These are all the questions that I have just gotten in the past couple of minutes for engine related things. Obviously we're not going to get to all these today because I have two pages up here that I've already got. What we will be doing is allowing these experts to respond to these off-line. We will be publishing a full synopsis of as many questions as we can on the IIUG site.

This one would go to Don Top. A hot question. When will Informix Universal Server go GA and what features and/or DataBlades will be included in that release?

DT: Actually I got special dispensation from marketing to actually mention a date. Normally when development gets asked for dates we're supposed to say, we can't answer that. But, in this case I do have good news. Our plan is to make 9.12 generally available next month. So you should be able to order that within weeks.

As far as the feature content, the list is very long. If you haven't seen what's in Universal Server, then I'd be surprised because that's been marketed very well.

In general, it's got tremendous performance advantages. In a sense it's what Illustra had. It's compatible with Illustra, but our early indications from many of our Beta customers are that the performance is anywhere from 5-10 and in some cases 12 times, not percent, times faster than the same sort of operations performed in Illustra. So the goal of having extensibility on a robust, high performance parallel server is essentially achieved. Clearly the extensibility features are there. As far as which DataBlades ship with it, is there a comprehensive list that someone has on the top of their head? I don't. I'm sorry. Why doesn't Mike give you a couple of details.

MU: The text blades, the image blade and a number of blades from third party vendors. Really, you should probably stop by the DataBlade partners booth, or the other Informix booth and they can tell you exactly what schedule things are on, or catch one of the marketing folks who are hanging out in the back of the room there. They can certainly tell you exactly what the schedule is.

But we do have a good number of them working; some of them I don't even know about because they're done by third parties. They get them done, they work, they sell them. But the in-house blades, I believe, are shipping with the release at the end of the month.

CD: This one would most likely go to Don, and it comes straight from the comp.databases.informix discussion list a short time ago. When or will Informix products be ported to the Linux operating system? {laughter}

DT: Well, as the IIUG advisory council discussed with Erin earlier this week--Erin is our Marketing Manager, or Director of Marketing for this area of the products and she's responsible for the products making money, and so what she wants is the business case or the business cases to actually make that a product on our ports list. She has agreed to take input from you. She wants to talk to people, to hear from you if you believe strongly and can convince her that it would be in Informix' best interest to port to the Linux operating system.

CD: I can tell you that there is a committee formed within the Informix User Group Leadership Council that is specifically addressing that issue. Tim Schaeffer is heading up that effort on our behalf with other board members and he will be soliciting your comments again. It is part of the IIUG advocacy program, and we would invite you to participate in that so that we can provide a unified voice to Informix, build the business case and go from there.

Another excellent question from CDI, when will the eighteen character table and/or column name restriction be relaxed in Dynamic Server? Or any of the Informix engine products?

DT: There is already a development effort begun to achieve that. Clearly that's a feature change that is quite sweeping throughout the product and so we've begun that development already now and expect it will be available in all three releases of the product next year.

CD: Is it possible, or is there a way to restrict users from having the ability to run the "create database" command within an instance?

DT: Usually you like greater flexibility and now you want to impose restrictions!

In general our authorization levels relate to a database, and we have yet to put a good plan in place for a general system authorization scheme. We recognize this is being asked for. We obviously need to begin putting plans in place to permit further restrictions on things like this, or maybe limiting them only to a specific dbspace. I can see maybe having permissions on a dbspace for creation of objects within it, where an instance may have a public area that you would allow certain users or any user to create objects in, but you'd want to restrict them from creating stuff in your critical areas. So, we have heard this, we do need to begin planning for it. I don't know of it in any of the current plans that reach the street any time soon, though.

CD: This would probably go best to Jeff. It says, what is the fet_array_size, and they don't know whether it's a program or an environment variable or a program variable. How is it set and can it be used to enhance performance when storing or retrieving blobs?

JF: Well, to be honest, I could not remember what fet_array_size was, but I know what fet_buf_size is and if that's actually your question, is the person who asked the question here? If not, yes, you meant fet_buf_size?

{Comment from the floor}

JF: I'm sorry, I'll tell you about fet_buf_size. I answer questions I know and not the ones I don't. The fet_buf_size allows you to change the buffer size used in the communication between the client and the server. You can increase that up to about 32 KB; I believe it defaults to 4 KB. That means that except for "cursor with hold" and scrolling cursors, you will get as many rows as can possibly fit into that fet_buf_size.

Now with respect to blobs, this does not help blobs. It only helps rows, non-blob rows, and the only other thing of interest is that if fet_buf_size is used, there's one buffer per open cursor.

Oh, I'm sorry, and the way that you establish it is in the environment, by setting an environment variable so it can be set outside of the application. So existing applications don't have to be modified in order to utilize this feature.
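Because the setting is read from the environment, an unmodified application picks it up at startup. A minimal sketch (the application name is invented for illustration):

```sh
# Raise the client/server fetch buffer from the ~4 KB default toward the
# ~32 KB maximum mentioned above. Set in the environment, so no
# recompilation of the existing ESQL/C or 4GL executable is needed.
FET_BUF_SIZE=32767
export FET_BUF_SIZE

# Launch the unmodified application (hypothetical name); its non-blob
# fetch cursors now use the larger buffer, one buffer per open cursor.
./report_app
```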

{Comment from the floor}

JF: Not exactly but if you will get the session notes and the tape from session C7 on Wednesday, there was a great deal of additional information there.

CD: This is an interesting question that actually would probably come closer to Clem's neck of the woods since he works in International Technical Support or Troy. It says, will Informix ever release support utilities such as TB0, TBUP, and so forth?

CA: I'll speak a little bit about that. No! {laughter}

Let me explain. Even inside Informix the power to use those utilities is very closely guarded. Those utilities are something akin to the UNIX dd utility in that they write directly to Informix data structures. The improper use of those would corrupt the database beyond all hope of repair and in such a way that you might not even realize it. It's very, very difficult to use those utilities safely. They require an extremely detailed knowledge of all of the data structures and the inter-relationships on disk or in memory, accordingly. The additional levels of training required to use those utilities safely are currently only offered internally at Informix and even then only to a few people. So, that's the reason why.

Those are very powerful and very dangerous utilities. One potential avenue that you could pursue to address that issue for yourself is to lobby Informix for additional training or more training on structures, to enroll in the internal architecture course that's currently offered and to find out more about those structures. My feeling is that with more knowledge about what those utilities do and what structures are on disk already, you'll realize how enormously complicated it is and have a better appreciation for why the answer is no.

DT: Let me just add a little bit to that. I believe each of these tools was developed at some point along the way to assist getting around some problem that had come up, whether the logs are completely filled or a chunk had been disabled for a reason that you know didn't compromise the data on it. We have put features into the 7.2 product line, which obviously is also the 9 product line, that have helped get around some of these problems.

ONDBSPACEDOWN for instance allows you greater flexibility in what should happen if a failure occurs while the engine is processing an I/O to a chunk. It allows you to do something other than just go ahead and mark the space as disabled. Clearly you know once the space has been disabled and a checkpoint has been completed, that's when you usually call tech support and want to up the chunk. You know it wasn't anything catastrophic, it was someone tripping over the cord to the device or some other reason. There's documentation in the 7.2 literature on the ONDBSPACEDOWN parameter to let you control what action we take at that point.
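As a sketch of the configuration parameter being described, an onconfig fragment might look like the following. The value meanings are recalled from the 7.2 documentation and should be verified against your own release notes:

```
# onconfig fragment (verify values against your 7.2 release notes):
#   0 - mark the dbspace down and continue processing
#   1 - abort (shut the server down) on the chunk I/O failure
#   2 - block at the next checkpoint and wait for operator intervention
ONDBSPACEDOWN 2
```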

We're also working, for a future release, on giving you more flexibility in terms of re-enabling spaces. Our fear is that just marking things on-line and letting you proceed, without knowing whether there are inconsistencies introduced, can get us into a lot of trouble. So what we'd rather do is make sure you have all the facilities to verify your data once we do re-enable spaces without a full physical restore, to ensure index and other constraint consistencies.

So, we hear that part of it and I think what we really want to hear is what areas are causing you to have to resort to these utilities.

CD: The question is, how is it determined when a tech alert is issued?

CA: I guess that would be mine too. Tech alerts are, I guess if you don't know, the mechanism whereby Informix provides public notification of defects or problems that could affect the customer's data. Tech alerts are issued when an Informix support engineer feels that there's sufficient justification.

Normally the front-line support engineer is working with the customer. He or she recognizes there's a problem and feels that this problem could be widespread enough to justify notification through the tech alert process. That engineer then passes that opinion on to the support planning team. The support planning team evaluates the situation and then decides, at a higher level, whether or not to propagate that tech alert.

It's recognized inside the Informix support organization that this is a less than perfect process. I spoke this morning with the Manager of the Support Planning Team, and she informed me that, indeed, they have already formed a team to evaluate this process and are working on it currently. So I took that moment to put in a quick plug for the International Informix Users Group to ask her to solicit input from users, and also to possibly provide a mechanism for end users to request a tech alert more formally.

So, I think that process is under way and that something like that would provide the mechanism for end users to give their own opinion and to formally recognize that end users have an opinion that's invaluable about when a problem should be elevated to tech alert status and propagated throughout the user community in a proactive manner from Informix.

CD: When a patch release or when a patch version is released of the software, why must I as a user hunt that down from Informix? Why isn't there automatic notification of bug fixes? {applause}

CA: This is a complex question. Let me begin with a couple of definitions. Releases are accomplished in a step-wise fashion, such that we have major releases, maintenance releases, or patch releases. Patches, then, are really just desperate fixes for a customer's problem, and they often do not see the full QA validation process.

So the question was specifically, why don't I get notified about patch releases? And that's because patch releases aren't really for public consumption. They're an agreement to fix an immediate system-down problem for certain customers. And it's not suitable for propagation under every circumstance, so it's difficult to do that.

In addition to patches, the natural follow-up question then is, okay, why am I not notified about minor releases or about interim releases? And the answer to that is a little bit harder. We don't have, from the support side, a database of exactly who has what version and who needs to be notified about which product. We've taken steps to provide for that information on our web site.

The answer then is at current, you won't be notified, but you have the ability to find out about bugs and about interim releases. The best answer I can provide for that then is to monitor the web site periodically. This is under the Tech Info section of the web site. It costs you money to do that but it's included in most support contracts. It's considered a value added for the support contract.

So, my answer is, we do provide that information publicly but it's up to you to monitor that database. I spoke with the support planning people again and they assured me that this database is being kept even more up to date than it was before. They recognize that the web site is an important avenue for information and they've assured me that with the awareness of the responsibility to check this web site periodically, they'll increase their refresh rate, if you will, on this database and make it even more useful.

Along with that responsibility then, comes the opportunity for you to provide feedback to Informix. We have a feedback button on the web site. If what you see there is not suitable for you, rather than just complaining amongst yourselves, then use that feedback button and provide your opinion on a better solution to Informix. I think that answers all I have to...

DT: I guess the applause to the question probably was one big feedback to the thing, so clearly we need to re-look at that whole thing. Seems like a great opportunity for a publish and subscribe sort of application to get notified.

PH: I just wanted to add, is this on? I was in charge of support at Illustra, and absolutely, you do not want everyone to get a copy of a patch release. The whole definition of a patch release is that within 24 hours you've got the bug fixed and you're out at the customer's site making sure it fixes their problem. They are already in the mode of "let me just try this and make sure it works." You don't go through what at Illustra was a two-month process (it's a longer process at Informix) of making sure that the release works in every way. You all are software developers, and you know that one little fix could, in fact, have broken something else, and you need to make absolutely sure you haven't broken anything else when you push a release to as many customers as Informix has.

So under no circumstances would you ever want patch releases to be pushed to everyone, but Informix has a very good process for moving the patches and patch releases back to being bug fixes and regular releases so that everyone gets the benefit of that bug fix, it just doesn't happen instantly, it happens over time. That's working, that's a good thing.

Now at Illustra, we did something that was rather revolutionary: we published the list of open bugs with every release. And I mean every bug. I mean if a customer called and said, you don't have the thing that Oracle has to do this and I consider that a bug, that went into the bug database and it went into the list of bugs that we put out. The problem with that was that what one person considers a feature, another considers a bug, and suddenly you've got pages and pages of stuff to read through and nobody was reading it. So then you had the problem of nobody reading all of these bugs except, of course, the competition, who was very happy to read it to your customers and potential customers. At Informix they don't produce the list of open bugs with every release. The support people have it so that they can see what it is, but we don't actually publish it.

I consider that whole issue as an AI kind of issue. It's the kind of issue that Brett's group with Answers Online is now getting into: how do we adequately describe what other people have complained about in this release, so you already know that someone wants a feature or considers something a bug. There is work going on in that area in Answers Online.

BB: So I have a question for the folks out there, because with the Answers Online product, which I hope many of you are familiar with, we offer web-based delivery of our documentation. We also include that on the CD-ROMs that ship with all our products. The next evolution of Answers Online is an ability to do topic searches based on keywords and other information extracted from the underlying documentation, either on a web site or on the local copies we give to you. And furthermore, to subscribe to subject-area updates, so that you would get e-mail notification, or potentially other kinds of web push notification, of changes to the reference material that's currently covered within the body of Answers Online.

Now I would think it would be possible to utilize that same basic mechanism to provide at least notification to people who register their interest in a particular release area to be notified of new releases. So I guess I open it up to whatever forum is appropriate within the user group community here: give us some feedback on whether a subscription mechanism on the web would work, one that would allow you to go up there and say, these are the areas I'm interested in hearing more about, and I want to get e-mail and web push through some channel. Tell us a little bit about what you'd like to see.

Our first focus was on the documentation materials, but clearly we could think also of just basic notification of the availability of bug fixes and releases and so forth. So if the organizers of the group here could help channel that information in, then the group that I have that's working on Answers Online might be able to work with the tech support guys and add a high-value-added service at a very low incremental cost.

TH: One quick side note. For those of you who do take the time to read the release notes that come with the product, you will find that at the bottom of the release notes they have started including both open and closed bugs for all of the product lines for that particular platform when you receive it so you might want to look through that and see if there's something in there that addresses a specific problem that you were having.

CD: Is it possible to load more than one NT-based OnLine product on the same server and run them simultaneously? For example, WorkGroup Server and ODS? Or Personal OnLine and I-US for NT?

DT: At this point, no. That is going to be included in subsequent releases of ODS and I-US. It's because of the complexities of the registration of the product with NT; that so far has not been completed, but it's in progress and, like I said, should be in the next release of both 7 and 9.

BB: And I'll add that that's key, because it provides us with the ability to support active-active fail-over with Wolfpack Phase I, if you want to restart an instance on top of another one in a two-node fail-over configuration like Wal-Mart's going to do.

DT: And that too is in the next release of 7 and 9 on NT.

CD: This person writes that I'd like to better understand the output of ONSTAT commands such as ONSTAT -G SCH. Will there ever be more enhanced documentation of what these fields in the command output actually mean? {applause}

TH: Is that an applause item? I would say that there are probably three answers to this, and the first two are pretty obvious; you'll know them. First and foremost being the OnLine documentation and manuals. I do know it's a little limited in what it provides. The 5.0 [manuals] really got into some detail in some of it. The 7.0 [manuals] backed off a little bit on that.

The second thing you could do is, of course, and this is throwing out some marketing here for our training: when you go through our training courses, the class manuals tend to go into a little bit more detail on what the ONSTAT commands provide.

Thirdly, and unfortunately I don't remember the web address off the top of my head, but there is a web page out there where somebody has put together a complete list of the onstat commands and what each of the key items in their output represents. I'm probably like most people: I'm used to just clicking on favorites, clicking on the item I'm looking for, and, boom, it takes you to it, so how often do you memorize the address? What I'll do is provide Carlton with that web address so that it can be published on the IIUG web site, and that way you'll all have access to it.

CD: Thank you. What is the proper way of correctly sizing the physical log file?

TH: Oh, I'll give two of what I would consider basic answers.

One, and this is based upon experience: I found that at a minimum you want to set the physical log size to at least 4,000 pages. That's because in a system that's first being initialized, if the physical log is not large enough, you're going to have problems when the sysmaster database is built. I've probably seen at least half a dozen sites over the past two or three months that ran into that. What they'll see is that it gets so far through the sysmaster process, the logical logs fill up, and then it just stops.

Now the engine is not hung, it's simply waiting to continue finishing the sysmaster build. What you find is, if you shut down the engine, resize the physical log and then re-initialize, it works just fine. That's short term.

Long term as far as sizing the physical log, it really should be sized just like the manual says to allow for your check points to occur at the 75% full mark. I've been to a lot of sites where what people will do is they'll give the physical log this arbitrarily huge amount of disk space, 10, 20, 30 percent of the size of their root (dbspace), and they really don't need it.

What I would say is, start off with what you would consider to be an average-sized physical log--and that might be something on the order of 20 megabytes, 30 megabytes, even 100 megabytes, depending upon how much activity you expect to hit the system. You're going to have to monitor it over a period of time as the normal transaction rate hits your system. As checkpoints occur, check when they happen, which should be at the checkpoint interval, and compare that with how full your physical log is. If your physical log is only about 30% full when your checkpoints are occurring, you could probably reduce the size of the physical log.

One of the things I've seen recently on our tech mail is a lot of our engineers recommending that you just push the checkpoint interval to the ceiling and let your physical log drive your checkpoints. What you do at that point, of course, is just continue to resize the physical log so that once that 75% mark is hit, boom, the checkpoint is forced for you.

DT: Let me add just a bit to that, Troy. I think a follow-on question would be, okay, what's a good checkpoint interval? You've got to figure the checkpoint interval is approximately the amount of time you're willing to wait for recovery to complete should your system fail. Right? If your engine crashes due to an OS problem, a bug, or the power goes out, you want to restart that thing. If your checkpoint interval is 45 minutes, then you could expect, worst case, 45 minutes before logical recovery is completed, because that's how much logical log activity needs to be reprocessed.

So I think the first question to answer is: how long are you willing to wait for recovery in the case of a failure? Once you've set your checkpoint interval, you then work backwards from there and size your physical log so that its reaching 75% full doesn't trigger your checkpoints sooner than the interval and cause the degradation that can occur if you're checkpointing too frequently.

The other thing is the logical log size. That's the case that happens when you've got your sysmaster building, and again, you don't have to resize your logs to get sysmaster built; you can back them up. If you back them up (to tape), then once the first log is completely backed up it will free up, and the sysmaster build process can continue. Clearly you want enough logical log space there so that, with the high water marks for long transactions at approximately 50%, your longest transactions can complete within the space allocated. And I'm sure I just opened up a whole bunch of follow-on questions in that whole area.

TH: Actually, just one follow-up to that. I want to make sure that you're all aware that in the situations I've seen where sysmaster stops building, the logical logs are not wholly to blame. I've seen this on AIX; I've seen this on Solaris platforms. As soon as we set the physical log to 4,000 pages, boom, sysmaster builds just fine. I haven't had a chance to research it to find out why it occurs. But just to let you know, in this particular case we had, I guess, 20 logical logs. Only 2-1/2 of them were full and yet it still stopped. So just a quick FYI.

CA: Yeah, there you go, tech alert. {laughter}

CD: This is rather an ugly open ended question but I'll ask it anyway to Jeff. What is the most efficient way in terms of performance and throughput of running update statistics?

JF: Well, published in the release notes is an outline, which I provided, that lists how you should run update statistics to get the most benefit. I won't outline how to do that here because it's a little complicated; the reason behind the complication is to minimize the overhead. If, as an example, you were to run update statistics high so you get high-mode distributions on the table, you're also doing update statistics low by implication, and that takes time. What is available here, as an example: if you run update statistics high and you list only a single column, and that single column heads an index, you don't do a sort. You use the index to get the ordering required for the distributions.

All of this is listed in the release notes, and while I wish it were less complicated, it should produce the best distributions with the least amount of impact on your system.

Also, just so it's clear, we do not lock any of your data tables, and, with an exception to be noted, we only lock system catalogs when we're updating the sysdistrib information and the sysindexes information you get on each column or each index. The exception is if you open a transaction before you run update statistics and you set your isolation level above dirty read. Then what will happen is we will lock these things and keep them locked until the end of the transaction, when we commit. One of the things that's not mentioned in the release notes is that you should not put your session into a transaction when you're running update statistics. We will automatically protect all of that for you. If you have a mode ANSI database, then you should set your isolation level down to dirty read. That does not mean that your logs are unprotected; your logs are still protected. It's just that this way you'll have the least interference in your system.
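Jeff's recommended sequence can be sketched in SQL. The table, columns, and index layout below are hypothetical, and the exact sequence for your schema should come from the release notes themselves:

```sql
-- Build distributions once per table without redoing low-mode work.
UPDATE STATISTICS MEDIUM FOR TABLE customer DISTRIBUTIONS ONLY;

-- HIGH on each single column that heads an index: the index supplies
-- the ordering, so no sort is required.
UPDATE STATISTICS HIGH FOR TABLE customer (customer_num) DISTRIBUTIONS ONLY;

-- LOW across the full key of each composite index.
UPDATE STATISTICS LOW FOR TABLE customer (lname, fname);
```

Per the discussion above, run this outside an explicit transaction (and at dirty read in a mode ANSI database).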

CD: While you're standing up there let me ask you another question. Is there a way to force the optimizer to use a specific join or index path when processing a query?

JF: This request has been made over the years, and in ODS and I-US we're going to have a release which will include something called Optimizer Directives. This will allow you to give hints to the optimizer about things you'd like it to do. This is an enhancement over competitive products' offerings in that we offer things like specifying that a specific index not be used, and other additional features. I won't go into the list.

Of course the one caveat then is you have to become an expert in how to use the directives and what it means to choose particular paths. It also means that when your data changes and your directives maybe aren't the appropriate directives now that your tables have gotten larger or smaller, you will have to adjust the directives accordingly because we will follow your advice.
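For readers evaluating the feature when it ships, a sketch of what a directive might look like. The directive names and the comment-hint syntax here are assumptions about the announced feature, not documented syntax at press time:

```sql
-- Hypothetical directives: force a full scan of orders and preserve
-- the FROM-clause join order.
SELECT --+ FULL(o), ORDERED
       o.order_num, c.lname
  FROM orders o, customer c
 WHERE o.customer_num = c.customer_num;
```

As Jeff cautions, directives are followed literally, so they need revisiting as data volumes change.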

DT: Do I get a chance to say a little bit more about that and other things that are going to be in the next version 7 release? I know we're getting towards the end here and we've been talking in general terms of what's happening when. Optimizer Directives are scheduled to be included in the next version 7 and version 9 releases. There are other SQL compatibility features targeted for that: the NVL function, the decode function, string and byte manipulation functions, upper, lower, initcap. {applause} You may applaud, that's fine.

There are, in addition to that, substantial performance enhancements planned. We've got significant improvements to in-place alter table, where not only can you add columns but you can drop and modify types without having to do a table copy. That will be in place.

We're adding the Enterprise Command Center to the UNIX distribution. It'll be running on NT, of course, but it will be able to control all three flavors of server on both NT and UNIX.

We'll be shipping our own Storage Manager to go with OnBar. There will be Enterprise Replication enhancements.

So I know one of the other questions, Carlton, was: where's the long-term plan? Our plan right now does include yet another substantial improvement in version 7 functionality, and all of the things I've mentioned will be there, as well as some RAS features--stuff to help support diagnose problems should they occur, and have the engine itself self-diagnose some problems. Let's see, what else am I missing here? Oh, some additional availability features: besides the in-place alter not requiring a complete table copy, we'll allow attach and detach of fragments without rebuilding all of the indexes. That will be more of just a manipulation of the required metadata. And the CASE statement will be included.

So as you can tell, there's a number of things coming. I think Brett mentioned Wolf Pack; that will be supported in the next NT release of the server as well.

CD: I'd like to ask the panel one question. We are literally at the end of our allotted time. Would you be willing to spend an extra fifteen minutes responding to questions? I know we took a little bit longer break . . . would you be willing to sit around for an extra fifteen minutes? Okay.

Next question. When does Informix expect to be fully SQL 92 compliant?

MU: No vendor is at full level. Very few are even at transition level, which is the one above entry. So I think we need more detailed feedback on what exactly the features are that you want. It's an enormous engineering effort to do full, and we'd rather not do it if there are only a few features that people want out of that. That's sort of my answer to a "when" question.

PH: When this came up at the Informix User Group Leadership meeting, it turned out there was a little confusion. Really, I think, most people want us to add more Oracle compatibility rather than standards compatibility.

Oracle is not fully SQL 92 compliant, so if this is really a question of when we will add Oracle compatibility features, that's really different. I think we need to understand that precisely, because we are engineers, and if the command is given, thou shalt be fully SQL 92 compliant, and that's not compatible with Oracle, we'll do the full 92 compliance.

So I think the overall question is, like Michael was saying: what is motivating this question? What is it that people are not seeing that they want to see?

CD: Can someone briefly address the strategy of JAVA in the engine?

MU: So in I-US we implemented the ability to write user defined functions and procedures through a fairly general interface, an internal interface. The first instantiation of that interface was in C. We have a project that is about at the alpha stage to allow that in JAVA.

What that means is that you can write methods in JAVA, which then get invoked as procedures or functions from SQL in I-US. This project is pretty far along. We do have a working version in house, and it will be going out over the course of probably the next 6-9 months as far as availability.

CD: How do I change the owner of a table without dropping and recreating it?

BB: I'm sorry, that ability does not exist, so that looks like something you'd want to feed back through the channels that have already been mentioned as a desirable feature for you.

CD: That's too short an answer. I've got to come up with another question here. Here's a good one.

PH: Wait a second, while you're doing that. Brett and I wanted to spend just a couple of minutes talking about the whole JAVA strategy. The question of when we will have a native JDBC driver, which came up, was really part of a larger context, which is: what is our total JAVA strategy? We have a very strong JAVA strategy. Did you want to take it?

BB: So we talked about JAVA in the server. It's running at Informix now. It'll be in customers' hands later this year.

On the client side we're offering basically two levels of JAVA interface. The first is native JDBC, and it is available. The other option, which is, I believe, the preferred option for many of you--and this is similar to the discussion that Mike went through for ODBC versus ESQL and some of our higher-level APIs--is our native component JAVA API. That is now generally available for the Universal Server. So those of you who want to write client-side JAVA applications exploiting the full features of the Universal Server can do that today.

Now in addition, we offer a limited capability to do the same thing. That is to write client-side JAVA for OnLine Dynamic Server, WorkGroup Server and Workstation. That's included as part of the JAVA Data Director product. By the end of the year we will open that up so you can write client side JAVA using any tool you want. Even if you don't use Data Director against the OnLine Dynamic Server, you can build business logic in JAVA on the client-side using our Native APIs, and transparently upgrade that from OnLine to Universal Server where you can then exploit the power of the Universal Server.

So we've got JDBC as well as a native JAVA capability. There are third-party JDBC solutions available now for Informix if you want to go with JDBC, and our native interface is available today as well.

CD: What, if anything, can be done to improve the performance of correlated subqueries?

JF: First let me say that XPS already has, and work is now ongoing in both ODS and I-US to add, the ability to automatically unroll correlated subqueries for you. Without that support, what you have to do is examine your queries--many of them can be rewritten into joins, or they can be rewritten by taking the subquery, placing it into a temp table, and then doing some appropriate manipulations. For the time being the only solution is to learn how to rewrite them yourselves into joins; in the future you can take hope that we will have solutions for you. Like I said, it already exists in XPS and will exist in I-US and ODS in the next releases, which will do it automatically for you.
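The rewrite Jeff describes can be sketched with two equivalent queries; the table and column names below are illustrative:

```sql
-- Correlated form: the subquery is re-evaluated for each outer row.
SELECT c.customer_num, c.lname
  FROM customer c
 WHERE EXISTS
       (SELECT 1 FROM orders o
         WHERE o.customer_num = c.customer_num);

-- Equivalent join form, which the optimizer handles far better.
SELECT DISTINCT c.customer_num, c.lname
  FROM customer c, orders o
 WHERE o.customer_num = c.customer_num;
```

The DISTINCT guards against duplicated outer rows when a customer has more than one order; whether it is needed depends on the query being rewritten.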

CD: It has been said that the ODS, XPS and I-US code streams will eventually merge. Is there a projected date and/or version when that will occur?

DT: Let's turn the question into when will all the functionality be available in one product because I don't want to really get into how we would accomplish that inside our code lines.

It is true that portions of our code lines have diverged across the three different products. Some are still together, some are already in the process of being merged back together, but that's not really, I think, what's behind the question. I think the question is: when will we have all of the version 7 functionality in XPS? When will we have extensibility in XPS?

Maybe what I'm already hinting at is the plan that we have in place, and I don't think I'm speaking out of turn here: the plan is to have version 8, our XPS server, be fully compatible with version 7 and as fast, if not faster, on a single box.

The project to follow that will be to take the extensibility that we've already put on top of version 7, the ODS++, and put it on top of XPS as well. Exactly how that happens, whether that's a merge from one code line to the other, I don't think you're that concerned about; you're looking for when it will be available. I think it's probably going to be phased in, as opposed to one product having everything at once. You'll probably see the beginnings of some of that late, late next year or early '99, with completion some time after that.

CD: Is there a way to upgrade a DataBlade to a newer version besides unregistering the old one, which cannot be done if the database already has data of that complex data type?

PH: The DataBlade Developers Kit is what people are using to register and unregister DataBlades. This is probably our top functionality request for that.

The short answer today is no; you do have to completely drop and recreate them. But this is a definite part of a newer release of the DBDK, the DataBlade Developers Kit. I don't have the exact date and wouldn't want to promise one right now anyway, until we know everything else that's in that release.

CD: In the UNIX operating system you can manage permissions or functionality through the use of groups. Is there similar functionality in the Dynamic Server engine?

JF: I think the answer to your question is roles. With roles, which are part of 7 and part of 8 (XPS), and I think maybe even 9, you can specify that a user can take on the attributes of a specific role which you set aside and give permissions to. That gives you that functionality.
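A minimal sketch of roles as described (the role, table, and user names are all hypothetical):

```sql
-- Create a role, give it privileges, and grant it to a user.
CREATE ROLE report_user;
GRANT SELECT ON orders TO report_user;
GRANT report_user TO maria;

-- The user then assumes the role's privileges for the session:
SET ROLE report_user;
```

Privileges can then be managed against the role rather than against each individual user, much like a UNIX group.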

CD: Last question, and this is actually from the Manager of the User Group Program: has a session such as this, where you can interact with the audience, been of benefit to you? Do you think it would be of value to do next year, and would you be willing to do it again?

KW: Oh, I can speak unilaterally to that. I think everybody appreciated this opportunity to talk to you and to hear your questions. I believe we'll all be here next year for a similar conference. You bet.

CD: Okay. I'd like to say thank you to the panel. We'll get this information to you on our web site as soon as we can.

Book Discounts Available to Informix User Group Members

Thanks to the work of Gavin Nour, and others, the IIUG Board of Directors is pleased to announce that two publishers have agreed to give IIUG members discounts on the purchase of Informix-oriented titles they carry.

Informix Press/Prentice Hall

Prentice Hall PTR and Informix Software, Inc., a leader in enterprise database management systems and tools, have teamed up to deliver authoritative books on Informix database programming, administration, and application development. IIUG members can receive a 30% discount off all Informix Press titles from Prentice Hall. Members are invited to contact Advanced Information Technology, Inc. for fulfillment by calling 1-888-906-8410, faxing 1-732-906-8411, or by sending email to Advaninfo@aol.com. ALL ORDERS MUST REFERENCE THE ORDER # INFX2 TO RECEIVE THE 30% DISCOUNT.

Sam's Publishing

IIUG members will receive a 30% discount on Informix Unleashed, ISBN 0-672-30650-6. Members should call 1-800-428-5331 and state that they are IIUG members wishing to take advantage of the 30% discount on Informix Unleashed. The account number is 11285730. All orders must be pre-paid with a credit card and will only be shipped to addresses in the US.

Announcement - ARCUNLOAD
a utility to extract a table from a level-0 archive (tbtape/ontape)

(Editor’s note: A wonderful new utility has been posted to the IIUG web site. This is a brief description of ARCUNLOAD from the web pages.)

CAUTION! Use of this utility has the potential to crash and/or corrupt your OnLine instance! Please read these instructions completely before using.

The arcunload utility extracts a table from a level-0 archive (tbtape/ontape) and writes it into a file on tape or disk in tbunload/onunload format. It can then be loaded into a database using tbload/onload.

The table will be re-created to its state at the time the archive was started. If there were open transactions against the table at the start of the archive, this could result in loss of transactional integrity. In addition, it is theoretically possible for index or other corruption to be introduced as a result (although this has never been observed in testing).

Because tbload/onload could potentially crash the engine or leave the database corrupt if the unload file is corrupt, it is strongly recommended that a test OnLine instance be created for loading the table. The resulting table should then be fully checked for corruption (using tbcheck/oncheck -cI, -cD, -cc, -ce, and -cr) before copying the data into the target system. It is recommended that you copy the data to the target system using a remote query (INSERT INTO ... SELECT FROM ...) or an ASCII utility such as UNLOAD/LOAD to double-check the integrity of the data.
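That final copy step can be sketched as a remote query; the database, server, and table names below are hypothetical:

```sql
-- Run from the verified test instance after oncheck comes back clean.
INSERT INTO proddb@prod_server:customer
     SELECT * FROM customer;
```

An UNLOAD to an ASCII file followed by LOAD on the target accomplishes the same integrity double-check if a remote connection is not available.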

This utility is not supported by Informix, and Informix will not take any responsibility for repairing any corruption to an OnLine system resulting from the use of this utility, whether proper or improper.

Arcunload is available only in executable format. The hardware platform, OS version, and version of OnLine from which the archive was taken must match exactly the target system for the load and the port of arcunload.

This version does not support:

    Blobs in blobspace
    Remote tape devices
    Fragmented Tables
For more information or to download this utility visit the Special Software Section of the IIUG web site: www.iiug.org/software/special_software

WAIUG Corporate Membership

Corporate membership is available to companies who wish to participate in user group activities. The benefits of corporate membership include:

  • All the benefits of individual membership for up to 12 individuals, at a reduced cost. (Additional members may be added if needed at the individual membership fee.)
  • One designated point of contact from the corporation can add and delete individual members. The membership stays with the company, not the individual, should an individual member leave the company.
  • Company purchase orders will be accepted for user group activities.
The corporate membership fee is $200.00. This allows a company to sign up to 12 individuals and be invoiced for the membership fee. Members will receive four newsletters and all membership announcements and mailings.

Many thanks for the support of our current corporate members:

  • HQ Defense Courier Services
  • Interealty Corporation
  • London Fog Industries
  • Marriott International
  • National Association of Securities Dealers
  • ProLink Services L.L.C.
  • Reynolds Metals Corporation
  • Sallie Mae
  • United Communication Systems
  • Upgrade Corporation of America
  • U.S. Order
  • Vector Research, Incorporated
WAIUG Sponsorship

The user group has been supported by many companies over the past years. The major financial sponsors of the user group have been:

  • Advanced DataTools Corporation
  • Summit Data Group
  • Business Systems Support Group, Inc.
  • Informix Software, Inc.
  • Pure Software, Inc.
  • Compuware Corporation
The options listed below are available for companies who would like to participate in our activities. Please contact Nick Nobbe, Program/Sponsorship Director, at 202-707-0548, or Lester Knutsen, President, 703-256-0267, for more information about sponsorship opportunities.
• Presentation at Meetings - We plan on one presentation per meeting from vendors that have products that work with Informix.
• Newsletter Sponsorship - The newsletter is produced quarterly. Each mailing goes to over 900 users in the Washington area. Companies sponsoring the newsletter may place a one page ad. This is a great way to announce a new product or job opening to 900 Informix users.
• Local Forums - We have held three one-day Forums for our members, offering numerous seminar sessions and an exhibit hall with 10-14 vendors demonstrating products that work with Informix. These events have been attended by over 200 people, and have been a very exciting way to share new developments related to Informix database software. Exhibitors have found this to be a very worthwhile event targeted at Informix users.

 This newsletter is published by the Washington Area Informix User Group
Lester Knutsen, President/Editor
Washington Area Informix User Group
4216 Evergreen Lane, Suite 136, Annandale, VA 22003
Phone: 703-256-0267