SECTION V
Interactive Modes of Operation
Initial deliveries of GCOS systems were aimed at batch processing, which was then the main mode of business data processing served by Honeywell and Honeywell-Bull. The engineers who designed the features described above were criticized for being more ambitious than the market required. However, the market changed progressively during the 1970s: traditional planned batch processing was to be replaced by real-time transaction processing. Such a change was already perceptible in the mid-1960s, but had been met with ad-hoc solutions on systems like the GE-400, the GE-600 and the IBM S/360. The challenge was to incorporate those new requirements into a general-purpose operating system.
Interactive Operator Facility
Around 1980, there was a requirement to build limited "time-sharing" facilities within GCOS. The goal was essentially to provide interactive debugging facilities to programmers and also to port some large applications needing interaction. There were alternatives to the solution that was chosen. The TDS subsystem was already available, but the nature of TP operations was specific and too restrictive: special programming conventions, a limited working set of transaction programs, etc. Building a new subsystem, as had been done with IBM TSO or GCOS III TSS, was another option. Instead, it was preferred to extend the operator facilities and the JCL to build the interactive facilities, and to use the standard system for the rest of the environment (programming facilities, dispatching, etc.). The IOF environment could thus be extended almost ad infinitum.
The architectural implementation was to allocate a process group (one J) per user and to spin off further Js as required. The total number of Js was limited to 255, that number being an architectural limit on the number of IOF users. Later, slight modifications to the architecture introduced by the Japanese partners somewhat lifted that limitation. In any case, the number of IOF users rarely exceeded 10 on a GCOS7 system.
JCL, which already had some powerful capabilities, was extended for interactive use and renamed GCL (Generalized Control Language). Facilities built into JCL that had been of limited use (for compatibility reasons between GCOS versions) were revived, giving GCL most of the capabilities of a shell language. However, its batch-language heritage restricted the primitives to the commands already available. Instead of developing features like regular expressions and pipes, it was chosen to augment GCL with a MENU processor offering a formatted menu to the user, including default options for most of the parameters. The menu processor ran in a listener thread allocated to each display terminal. That thread passed a GCL stream for interpretation to the job scheduler server; if the GCL included the launching of an interpretive processor, such as the editor or BASIC, or of a user interactive program, the remote terminal was allocated to that processor until its termination. IOF kept listening for a BREAK signal that could interrupt the operation of the interactive program if the operator wished to get out of a loop or other faulty behavior.
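The listener-thread dispatch can be sketched in modern terms. Everything below (the class and function names, the toy interpreter, the use of an event flag to model BREAK) is a hypothetical illustration, not GCOS code:

```python
import queue
import threading

class TerminalListener:
    """One listener per terminal: forwards command lines to an
    interpreter and can interrupt a running program on BREAK."""

    def __init__(self):
        self.commands = queue.Queue()
        self.break_signal = threading.Event()

    def submit(self, line):
        self.commands.put(line)

    def send_break(self):
        # BREAK interrupts whatever interactive program holds the terminal.
        self.break_signal.set()

    def run_one(self, interpreter):
        line = self.commands.get()
        return interpreter(line, self.break_signal)

def toy_interpreter(line, break_signal):
    # Stand-in for GCL interpretation: runs until done or until BREAK.
    for step in range(1000):
        if break_signal.is_set():
            return f"{line}: interrupted at step {step}"
    return f"{line}: completed"
```

The key design point mirrored here is that the listener, not the interactive program, owns the BREAK channel, so a looping program can always be interrupted from the terminal.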
Another layer was added to the console mode of IOF operation that was initially available. It was a menu handler based on VIP alphanumeric display terminals, allowing the application programmer to modify the terminal interface. A default system menu was substituted for the command interface.
IOF was the environment for several programming languages, such as BASIC and APL. The environment of those languages was quite different from that of standard programming languages: an interactive editor/parser spun off an execution job step (the byte-code interpreter).
IOF was also used with a text editor, essentially for entering programming-language source code and documentation (the latter during the period when PCs were not yet a common tool, in the early 1980s). The editor was complemented by a formatter named WordPro that worked in a way similar to troff.
Although Bull made extensive use of IOF for developing HPL programs for GCOS itself, specific language processors were developed for IOF. Most of them featured interpretive execution.
BASIC was the first language implemented, in the late 1970s. As soon as a "line" was entered, it was parsed and translated into byte code.
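This enter-and-translate cycle can be illustrated with a toy translator; the opcode names and the three-statement grammar are invented for the example and do not reflect the actual IOF BASIC byte code:

```python
# Hypothetical mini-translator: each numbered BASIC line is turned into
# byte code as soon as it is entered, as the IOF BASIC processor did.

def translate_line(line):
    """Translate one numbered BASIC line into a (line_no, opcodes) pair.
    Only LET, PRINT and END are recognized in this toy grammar."""
    number, _, body = line.strip().partition(" ")
    keyword, _, rest = body.partition(" ")
    if keyword == "LET":
        var, _, expr = rest.partition("=")
        ops = [("PUSH_EXPR", expr.strip()), ("STORE", var.strip())]
    elif keyword == "PRINT":
        ops = [("PUSH_EXPR", rest.strip()), ("PRINT",)]
    elif keyword == "END":
        ops = [("HALT",)]
    else:
        ops = [("ERROR", body)]   # syntax errors reported at entry time
    return int(number), ops

program = {}
for line in ['10 LET X = 2', '20 PRINT X', '30 END']:
    n, ops = translate_line(line)
    program[n] = ops              # kept in line-number order for execution
```

The benefit of this scheme, then as now, is that syntax errors are caught the moment a line is typed, rather than at a later compile step.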
APL was also implemented under IOF in a similar environment. Special keyboards were supported. APL was used by some standard applications developed by software houses and ported from the IBM VM/370 environment. Such a combination consumed much CPU resource, and an attempt to microprogram an APL interpreter was undertaken but finally cancelled. A few application programs were written in APL by a French software house.
A LISP interpreter was also written under GCOS. While the interpreter itself did not require special features, the "artificial intelligence" mood of the early 1980s caused several projects to consider LISP as a hub for many interactive applications, one of which was the famous "automatic configurator" publicized by DEC and seen as "the" solution for assembling complex systems. The configurator was written in KOOL, which generated large LISP data sets, regrouping in the same set "procedures" and "user data". GCOS offered a large 4 MB segment for storing that text, but processing a dozen "configurators" in parallel led to an excessive working set in the system (in terms of paging misses and TLB flushing). GCOS had nothing to solve that challenge, nor did many other systems, which was probably at the origin of some discredit of AI languages.
IOF lacked the interactive features invented on workstations, and the X Window System became popular too late to have influenced the GCOS operating system. GCOS never had such features and would have had to be deeply modified to become a windowing system. The market for GCOS, after having been batch processing, definitively moved to the role of a database / transaction processing server.
Transaction Driven System
The basic architectural constructs of GCOS did not directly match the requirements of transaction processing. The number of terminals (several thousand) connected to such an application was bound to exceed the architectural dimensions. The overhead implied by the basic GCOS model (i.e. associating a job step with each transaction) had already proven unacceptable in GCOS III TP-II.
In most transaction systems operating under a general-purpose operating system, such as GCOS III, GCOS 8 or IBM OS, a transactional subsystem reimplementing most functions of the OS was layered on top, originally developed by sales support (e.g. IBM's Customer Information Control System). Instead, GCOS TDS was developed by engineering and took advantage of the basic OS and of provisions reserved for that purpose from the initial design.
The TDS (Transaction Driven System) model was to gather a library of application-specific commands that could be "called" by users (almost exclusively clerks dialoguing with their own customers by telephone or at an office window). The eventual purpose of those commands was to update databases and, optionally, to deliver a printed ticket or receipt to the customer. The database was used to retrieve information, to create new information records and to update existing data. Frequently, in addition to the dialog with on-line customers, other transactions or printouts could be triggered by thresholds recorded in the database, or by timing events. The transaction commands were named TPRs (Transaction Processing Routines). They were stored in binary format as shared modules in a library.
TPRs were written in a special COBOL dialect. They were preprocessed, compiled and linked as "shared modules" (type 2 segments), using an option of the static linker. They were processed as re-entrant modules to be executed in a thread initiated for each "transaction". The loading time by the dynamic linker was minimal. A cache of loaded TPRs was maintained, so no additional I/O was needed for the most frequent transactions.
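The effect of the TPR cache can be sketched on a toy model; the capacity, the LRU eviction policy and all names here are illustrative assumptions, not documented TDS behavior:

```python
from collections import OrderedDict

class TPRCache:
    """Toy cache of loaded TPR shared modules: the most frequent
    transactions find their code already in memory and need no
    additional library I/O."""

    def __init__(self, capacity, loader):
        self.capacity = capacity
        self.loader = loader         # performs the library I/O on a miss
        self.loaded = OrderedDict()  # name -> shared module (re-entrant)
        self.io_count = 0

    def get(self, name):
        if name in self.loaded:
            self.loaded.move_to_end(name)    # mark as recently used
            return self.loaded[name]
        self.io_count += 1                   # miss: read from the library
        module = self.loader(name)
        self.loaded[name] = module
        if len(self.loaded) > self.capacity:
            self.loaded.popitem(last=False)  # evict least recently used
        return module

cache = TPRCache(capacity=2, loader=lambda name: f"<code of {name}>")
cache.get("DEBIT"); cache.get("CREDIT"); cache.get("DEBIT")  # third call hits
```

Because TPRs were re-entrant, a single cached copy could serve many concurrent transaction threads, which is what made such a cache worthwhile.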
The working area of the transaction thread was the stack, but in addition some data segments (protected by their read-only status or by semaphores) could be specified by the programmer. A TPR could SEND/RECEIVE additional messages to and from the clerk and could access one or several records of one or several databases. There was no specific restriction on database usage, and several access methods could be used in a transaction. The TPR, however, had to issue COMMITment statements when it was ready to free the databases, and had to TERMINATE, relinquishing all other resources not stored in the file system.
TDS used the GCOS mechanisms, mapping its own architectural concepts onto them:
First, terminals are not permanently on-line with the GCOS system: they are ignored until they log in.
Second, terminals are not architectural entities but only sources and destinations of messages, so a transaction can involve one or several terminals. Even a logged-in terminal has merely been made known to the TDS subsystem; its user may then send messages.
Third, when the terminal user sends a message that begins a transaction, i.e. a command recognized as such by the TDS overseer, a "virtual process" is created within the file system for that transaction.
Fourth, this "virtual process" is mapped onto one of the threads of the TDS thread pool. The mapping may be immediate, if there are free entries in the pool, or it may be delayed.
Fifth, the mapping remains effective until the thread has to suspend itself for long-duration events, such as exchanges of messages with the terminal (which imply relatively long transmission times and even longer user "think time"). In those cases, the virtual process is unmapped and its context is stored in the file system (the TDS swap file) until the terminal's answer is received. A programmable time-out may cancel the transaction.
Sixth, the transaction may just read the database(s), in which case termination (normal or caused by time-out) requires no special operation. Alternatively, it may alter the contents of the database. Modifications of the database are journalized by copying the concerned block before modification (the before journal) and storing a copy of the modified record (the after journal).
The purpose of the before journal is to allow cancelling the modification if the transaction terminates before a COMMITment has been taken for the update. The purpose of the after journal is to reconstruct the database if a problem (hardware failure, system crash) requires backing up the database before restarting the processing of transactions.
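The interplay of the two journals can be sketched on a toy key-value store. The record-level granularity, the method names, and the choice to write after-images only at commit are simplifying assumptions (the real TDS journalized control-interval blocks and records):

```python
# Minimal sketch of before/after journaling on a toy "database".

class JournalizedDB:
    def __init__(self, records):
        self.records = dict(records)
        self.before = []   # before-images of the current transaction
        self.after = []    # after-images of committed transactions

    def update(self, key, value):
        self.before.append((key, self.records.get(key)))  # before journal
        self.records[key] = value

    def rollback(self):
        # Cancel an aborted transaction by replaying before-images in reverse.
        for key, old in reversed(self.before):
            self.records[key] = old
        self.before = []

    def commit(self):
        # Only committed modifications reach the after journal.
        for key, _ in self.before:
            self.after.append((key, self.records[key]))
        self.before = []

    def rebuild(self, backup):
        # Reconstruct from a backup copy by replaying the after journal.
        restored = dict(backup)
        for key, value in self.after:
            restored[key] = value
        return restored
```

A usage example: an aborted debit is undone from the before journal, while a committed one survives a crash via the after journal replayed over a backup.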
In fact, the before journal was frequently replaced (at the customer's wish) by the mechanism of "deferred updates", where the database was not updated before the end of the transaction. That mechanism, in conjunction with the control-interval buffer pool and a General Access Control (GAC) implemented simultaneously, provided a database cache with all the coherency mechanisms needed for efficient processing of transactions. When the Oracle server was included in TDS, this cache became distributed, part in GCOS, part in the Oracle database, and possibly also in cooperative TP.
Among the events that could cause the termination of transactions was the mutual interlocking of transactions in concurrent accesses to several records. The strategy applied in that case was implemented in the GAC server.
The after journal gave a way to reconstruct the database in the event of a system malfunction. Another solution, optionally used at the customer's wish, consisted of keeping a log of transaction requests and replaying them after a crash. A logging of messages was often done in a TP system for arbitrating conflicts between end-users and clerks. However, the simultaneous processing of transactions would not guarantee the same result for on-line processing and for batched logged transactions; replaying logged transaction messages might cause problems when end-users hold guaranteed outputs of transactions that are not identical to the definitive update of the database. So the journalized file system was the more recommended solution.
Journals and the swapping file (containing transaction contexts) were the object of special care against hardware failures. Header and trailer time stamps were used to guarantee the integrity of those files.
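The header/trailer check can be sketched as follows; the field layout and names are assumptions for illustration, the principle being that a block torn by a failure mid-write will have mismatched (or missing) stamps:

```python
import time

def write_block(payload):
    """Write a journal block framed by matching header/trailer stamps."""
    stamp = time.time()
    return {"header": stamp, "payload": payload, "trailer": stamp}

def is_intact(block):
    """A block is valid only if both stamps exist and agree."""
    return ("header" in block and "trailer" in block
            and block["header"] == block["trailer"])

good = write_block(["after-image 1", "after-image 2"])
torn = dict(good, trailer=None)   # simulate a write interrupted by a failure
```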
Dual copies of databases were introduced essentially to decrease the recovery time in case of media failures, and secondarily to improve the latency of media accesses. Dual copy did not replace the existing mechanisms of deferred updates and after journals, which remained needed for 24/7 continuous operation.
The behavior of a transaction system depends upon proper planning of the transaction programs. Whereas batch applications and IOF applications may tolerate runaway programs, that is not the case for a normal operational transaction system. However, more protection was offered by GCOS than in competing systems where all running transactions operate in the same address space. All accesses to the databases and all modifications of the work files of the subsystem were monitored by TDS procedures. The execution of a runaway program inside a transaction was not likely to alter the integrity of the database seen by other transactions, and might not even be noticeable to other transaction users. For instance, using a transaction program recursively was likely to cause a stack overflow in the private thread space of the transaction, or to be stopped by a transaction time-out. The multi-threading of TDS was pre-emptive, and a transaction program could not monopolize a processor.
Although the initial specifications called for a single TDS subsystem in a GCOS system, there was no barrier to operating several TDS subsystems, with the same or different databases and the same or different access rights, in the same system.
When DSA was introduced in the early 1980s, the following options were taken for transaction processing. Terminals not attended by clerks were not connected to the network. Attended terminals had an open session to the TDS server after the clerk had logged in and been recognized by terminal-id and password. The network processor had no knowledge of the transaction concept and was totally transparent to the transaction protocols and commitments.
When cooperative TP was considered, an issue was raised about opening "communications sessions" for each distributed transaction, as the "connection" network architecture would have required. The overhead penalty was high enough to justify establishing permanent sessions between distributed transaction systems and using them as a pool of data pipes on top of which the CTP protocols would apply, realizing an "emulation" of "connection-less" protocols. TDS did not directly support "very long transactions" that would require the context of the end-user to be transported to another terminal or maintained over days. Specific protocols to ensure the cancellation of commitments in the database had to be established at application level (for instance, by separating the concepts of reservation and purchase, or by programming cancellation TPRs knowing how to undo commitments). The model where the end-user directly performs transactions from his or her own PC (using cookies) had not yet taken hold in the 1980s; in the era of mobile computing, that model has its own limits.
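The permanent-session pool can be sketched as follows; the class, the peer name and the pool size are invented for the example, the point being that many transaction exchanges reuse a small fixed set of sessions rather than opening one session per transaction:

```python
class SessionPool:
    """Toy pool of permanent sessions used as data pipes: opening
    happens once, then each exchange borrows and returns a free pipe,
    emulating a connection-less protocol on top of connections."""

    def __init__(self, peer, size):
        # Sessions are opened once, at subsystem start-up.
        self.free = [f"session-{peer}-{i}" for i in range(size)]
        self.opened = size           # total sessions ever opened

    def exchange(self, request):
        if not self.free:
            raise RuntimeError("all pipes busy: exchange must wait")
        pipe = self.free.pop()       # borrow a pipe for one exchange
        reply = f"reply({request}) via {pipe}"
        self.free.append(pipe)       # return it immediately afterwards
        return reply

pool = SessionPool("CICS", size=4)
for n in range(100):                 # 100 exchanges, still only 4 sessions
    pool.exchange(f"txn-{n}")
```

This captures the cost argument in the text: session setup is paid once per peer, not once per distributed transaction.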
In the mid-1980s, transactions that could be distributed between different TP systems were introduced using the CTP (cooperative transaction processing) protocol, closely mapped on IBM SNA LU6.2 (using DSA or DSA-mapped SNA networks). Distributed TP between GCOS TDS and IBM CICS was becoming a reality.
A common characteristic of the TP model exemplified by TDS was that the transaction system kept the state of the whole transaction on behalf of the users. This was a heritage of the era of dumb terminals (Teletypes and display terminals without programming capability). When PCs were substituted for those dumb terminals, there subsisted a lack of confidence in storing important enterprise-level data in the memory of PCs. Many customers jeopardized the integrity of their databases by unconsciously moving to the client-server model, where the state of the transactions was distributed, partly in the user workstations and partly in the server(s). The centralized state model (in conjunction with CTP protocols) was rugged and robust; however, it presented a bottleneck when a transaction processing application became open to millions of Internet users. It also had problems accommodating very long duration transactions or hierarchies of transactions.
Database Server
While data management had been handled as a service operating in the thread environment of its caller (the TDS threads for transaction processing), the port of Oracle by Bull engineers in the early 1980s marked a change in the architecture. Essentially, to minimize changes in the Oracle source code, the Oracle port was implemented as a separate J server (a specialized process group) receiving SQL requests from other client process groups (batch, IOF or TDS).
Those implementations helped the introduction of a large number of processors. The Auriga hardware architecture was characterized by the sharing of an L2 cache within a group of 4 processors. While that feature was masked from programmers and users, it had a significant performance impact, introducing a degradation due to address-space migration between L2 caches. In the mid-1990s, GCOS systems were sold with different prices attributed to the processors, decreasing the price for more computation-intensive applications such as the Oracle and buffer management servers, while keeping it high for standard GCOS applications.
Open 7 System Facility (UNIX environment)
From 1975 to 1982, GCOS was THE operating system of CII-Honeywell-Bull, and those responsible for it had a tendency to ignore two important factors that would change the world of software: the advent of the personal computer and the penetration of UNIX, an operating system developed essentially outside the industry, in non-profit organizations. The new Bull management was not at all biased in favor of the CII-HB product lines and attempted to convert the company to the world of open systems. It became obvious that opening the world of GCOS software would allow the integration of new applications at a low cost, especially those developed for a direct interaction between the user and his or her program. The IOF environment implied a high porting cost for application developers who had developed their software in mini-computer environments or other incompatible systems.
Several solutions for offering a UNIX-compatible environment were considered: a software solution, and a hardware solution where a processor supporting UNIX, such as the Motorola 68000, would have been attached to the GCOS system, as the DPS-6 front-end processor already had been. The hardware solution raised the issue of providing scalability across the whole range of DPS-7000. Its implementation was initiated in the early 1990s on GCOS 8 systems and was adopted only in the late 1990s on GCOS7 (the Diane project).
The software solution, consisting of building a UNIX environment the way emulators were integrated inside GCOS, was the first initiated. It received little attention because Bull management wanted to orient customers towards genuine UNIX systems. Its perceived scope was limited to the port of typical UNIX applications, the most important being the TCP/IP stack.
It was in fact a port of UNIX to the DPS-7000 instruction set. This was done using the GNU C compiler with the DPS-7000 assembler, generating native code. The UNIX supervisor, ported to the native decor, was linked to service functions that were calls to the GCOS services, providing access both to the UNIX resources (files) AND to the GCOS resources. This port of UNIX was multi-threaded using the micro-kernel support and could take advantage of the DPS-7000 multiprocessor, not only to have UNIX and GCOS coexist, but even to run several UNIX processes simultaneously.
Open 7, as the UNIX port was called, used the services of the GCOS operating system as the emulators did, sharing devices and system resources (timer, input and output). GCOS allocated to UNIX a large GCOS file that was mounted as a UNIX file system. All device I/O was handled by GCOS. UNIX benefited from the shared buffer pool of GCOS and did not need its own peripherals.
But it was able to control its own front-end processor (a real UNIX system) through a port on the Ethernet (or FDDI) local network and to perform TCP/IP networking on the same hardware resources as the GCOS system. Conversely, Open 7 implemented a TCP/IP server on behalf of GCOS7 programs.
When it was planned, around 1995, to discontinue the manufacturing of higher-performance DPS-7000 processors, this software solution lost its interest, and a return to the hardware solution was re-envisioned, using the Intel platform instead of the IBM/Motorola one. The TCP/IP stack was moved to native GCOS and became the basis of the interconnection of the two worlds (by RPC instead of direct calls, as had been the case in Open 7).
Finally, a DPS-7000 emulator (Diane 2) was developed on the IA-32 and IA-64 hardware architectures. Windows NT was used as a loader and supervisor for the GCOS applications, which thus survived, with most of the GCOS code, on top of the most popular architecture.