Kim was brought into the program to take responsibility for the initiation and conclusion of a number of research and analysis sub-projects. Three of the sub-projects were corporate-wide assessments of 1) quality factors impacting (mostly) Pipeline Integrity, 2) all applications, data sets and business processes that will be impacted by the replacement of the current GIS, and 3) the impact on business processes, data sets and applications of the introduction of dual linear measurement systems. Kim was also responsible for completing the physical data dictionary corresponding to the first two releases of the GIS data management system. Lastly, Kim is pulling together requirements for 1) the transition of components of the Hydraulics data management system to take advantage of the new GIS map interface, 2) the development of new image processing workflows to take advantage of the latest ARCGIS servers, and 3) system information to support sustainment.
Kim was the PPDM data analyst expert for the current deployment phase of the Devon PPDM v3.8 migration project.
Kim is providing data analyst support for the end of phase I of the Single Source of Truth project. This is (mostly) troubleshooting issues with the quality of the source data content and the proper migration into the ARTS-based data vault repository.
Kim was brought in to provide PPDM expertise for the Eagle Ford business unit data warehouse that will be harmonizing and amalgamating all sources of engineering, production and project data related to the Eagle Ford development. In addition to consulting on PPDM issues, Kim also carried out detailed mappings from internal and external source systems of record into the Eagle Ford PPDM data warehouse.
This was a very short-term assessment project to determine approaches to enhancing asset management at Devon (US) to support the new GHG reporting requirements. Kim was involved in assessing current state and contemplated asset management strategies.
Kim is a member of the Groundswell DA team that is responsible for providing data architecture and data analyst services for projects initiated by the DMS group at Cenovus. The projects mostly involved content moving into and/or out of the Cenovus data warehousing systems, EDW (formerly EDD – See Encana Corp., June 2005 to June 2006) and FIS. The major project was the upgrade of both EDW and FIS to support the new Wellview and Siteview versions (v9 and v4, respectively). Kim’s role was data architect and, to some extent, subject matter expert for EDW, Wellview and Siteview, and well data, in general.
This project was struck to understand and manage the changes to Husky systems and data stores arising from the Saskatchewan Government adopting the Petroleum Registry for regulatory reporting. The project had two components. The primary component related to the changes to production accounting arising from the new regulatory reporting requirements. Kim was involved in the second part of the project, which was managing the changes to UWIs and facility codes. Kim's role, as well as that of everyone else on the project, was cross-functional in that it involved data, integration and business analysis.
Kim worked with the team on building out the primary artifact of the project - a reasonably comprehensive map of all of the systems impacted by the change and of how information containing UWIs or facility identifiers moved between them. The scope was limited, of course, to systems and interfaces that were directly impacted by these changes. In addition to building out the map, Kim was instrumental in building out an understanding of the relationships of various Husky and other identifiers to the government UWI, and of how the conversions needed to be approached to minimize data corruption.
Kim was also the lead on the program and database changes related to handling of facility identifiers within Husky systems. With the commingling of Alberta and Saskatchewan facility information within the Petroleum Registry, and the move to all numeric identifiers, all systems that relied on uniqueness of codes across jurisdictions needed to be changed and tested.
Release II of the common data access operational data store comprised troubleshooting and warranty work related to the Release I implementation, with additional fine tuning of the data management algorithms, processes and procedures. The algorithms used for the Estimated Gross Margin component of the installation were calibrated, and additional production and pricing information was incorporated into the calculations. Kim worked with the Transalta team to build out the master and reference data management that supports the program.
Release 1 of the common data access operational data store included content for generation assets master data, production volumes, and financial transactions related to billings and settlements. The focus for information delivery in this release of the project was the content and methods necessary to support calculation, reporting and comparisons of estimated gross margin.
Kim's role, in conjunction with the Transalta architecture team, was to build out the data architecture, data models, and data management processes for the subject areas comprising the operational data store. The architecture and models needed to take into consideration the peculiarities of working within the constraints of SAP/BW to build out a (traditional) operational data store for data harmonization and delivery.
Kim was also responsible for architecting and modeling the solution required to build out the algorithms and processes used to define and calculate estimated gross margin. In addition to the models, Kim worked with the Transalta team on the master and reference data management approaches and processes.
This project had its genesis as an assessment and evaluation for a monthly reporting and executive dashboard initiative, with support for a managed, master data repository. The initiative evolved to focus more on managed master data and data readiness for common data access, capable of supporting and efficiently building out the required business management functionality.
The scope of the initiative included all operational data for production, availability, maintenance, facilities and human resources. In addition to assessing data readiness in most of these operational subject areas, and for almost all of the Transalta facilities, the team also worked with Transalta to assess and recommend the necessary technology stack and data governance model to support the initiatives.
Following the assessment, the team worked with the client to create a phased program and project plan to build out a common data access platform. The centerpiece of this common data access is an operational data store to harmonize content across the business, applications and data types, and an information delivery layer to deliver business content. The common data access is intended to be the seed for an eventual managed, master data management program.
The project was a Strategy and Architecture Assessment of Nexen's Information Management Improvement Initiative. Kim provided data architecture services as part of the Noah Consulting project team. In this role he was to investigate the existing data architectures and to make recommendations on the "to be" state for Nexen's GG&E systems globally. Included in the analysis were authoritative sources, master repositories, data warehouses, document repositories, data governance and details related to data types, data integration and data management processes.
Kim worked with the geoLOGIC PPDM expert to upgrade and integrate an original PPDM v2.3 database for a geoLOGIC client. The existing PPDM v2.3 operational database was converted to PPDM v3.8 and integrated with content from an IHS PPDM v3.7 datastore. To provide continuing support for a number of existing applications, PPDM v2.3 updatable views were created to layer over the integrated content.
Kim was brought late into the project to provide expertise related to the building of land data repositories in PPDM. This project is a sub-component of a larger project related to the upgrade of the CS Explorer land management system. The mapping completed when Kim was brought in was, unfortunately, off-track and was out-of-step with the rationale for building a PPDM land repository. Kim restructured around the central tenet of a land system — the lease — and started in on building the infrastructure and processes necessary to load content into a PPDM v3.8 database.
The integration approach is based on staging content as it changes in the source system, transporting new, changed and deleted content using SAP XI and staging the records in the PPDM repository. Separate processes are constructed for interpreting the content as it arrives. PL/SQL is the underlying technology for carrying out the data loads.
At the end of this first phase of the project all of the infrastructure for processing reference entities, converting units of measure, and handling versioning and audit history will be in place to support all future development. The first content to be loaded will be the essential mineral lease or mineral title header information. This is a pure PPDM v3.8 implementation — no extensions were used. The PPDM v3.8 meta model was used extensively for translations and conversions, and for managing source and target definitions.
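As a rough illustration of the interpretation step described above (in Java rather than the PL/SQL actually used, and with every name invented for the example), a staged record might be translated and converted like this:

```java
import java.util.Map;
import java.util.Optional;

// Illustrative sketch only: the actual loaders were PL/SQL, and all names
// here (StagedRecord, REF_XREF, TO_METRES) are invented for the example.
public class StagedLoader {

    // A staged row as delivered by the transport layer: an operation flag
    // (I/U/D), a source reference code, a unit of measure and a value.
    record StagedRecord(String op, String sourceRef, String uom, double value) {}

    // Source-to-repository reference translations; in a PPDM v3.8 system
    // these would be driven from the meta model rather than hard-coded.
    static final Map<String, String> REF_XREF = Map.of(
            "AB-CROWN", "ALBERTA CROWN",
            "SK-CROWN", "SASKATCHEWAN CROWN");

    // Unit-of-measure conversion factors into the storage unit (metres).
    static final Map<String, Double> TO_METRES = Map.of("FT", 0.3048, "M", 1.0);

    // Interpret one staged record: translate reference entities and convert
    // units; deletes are routed to a separate process (empty result here).
    static Optional<String> interpret(StagedRecord r) {
        if ("D".equals(r.op())) {
            return Optional.empty();
        }
        String ref = REF_XREF.getOrDefault(r.sourceRef(), r.sourceRef());
        double metres = r.value() * TO_METRES.getOrDefault(r.uom(), 1.0);
        return Optional.of(ref + ":" + String.format("%.2f", metres));
    }
}
```

The value of separating staging from interpretation, as the project did, is that transport failures and content failures can be diagnosed and replayed independently.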
Kim was the lead architect in the assessment of how to transition Husky's in-house integrated reporting and workflow management system from Maxwell to Wellview. Kim worked with the development team through most of the conversion to refine the transition strategy. He also took a hands-on role in converting views to support reserves planning for Heavy Oil operations.
Kim was the lead responsible for the migration and conversion of the bulk of the drilling and completions content from Maxwell into Wellview, and for the lease construction information into Siteview. This included working with a team of business analysts, SMEs and vendor representatives to identify, map and interpret the Maxwell content into Wellview and Siteview, and the preparation of models and designs for enabling the transition. Kim was also responsible for creating the loading process, scripts and procedures for carrying out the data migrations. The processes related to the lease construction content were especially intricate because of the presence of existing content from HS&E.
Kim was recruited to provide expertise on data migration and data quality issues related to the transition of the Lima Refinery operations from Valero Energy to Husky. A significant component of the overall transition related to recovery and integration of unstructured content into the Husky LiveLink data management system.
Except for the CORPWELL and eWATCH projects, this contract was mostly an operational role with respect to the Encana Data Depot (EDD) system. Operational support encompassed a number of relatively minor projects related to integrating new content and delivering data to several client systems, as well as the routine troubleshooting and enhancement of the system.
Kim was seconded from the Solutions Delivery Architecture (and EDD) team(s) to work with the eWATCH project on the CORPorate WELL repository component. A somewhat limited corporate well repository was proposed to the project team as a first step towards Master Data Management for well information at Encana, one that would support the requirements of the eWATCH system and lay the foundation for the proposed electronic well file.
The project requirements were, firstly, to access well information from the regulatory well system at Encana (the Complete Well Summary system), linked to entitlements and document content in the eWATCH system, and ultimately integrated, in one form or another, with well information from the Requisition to Drill and WellView drilling and completions systems. Kim proposed a (pure) PPDM data model as the framework for the CORPWELL system.
Kim was the lead on putting together the architecture and process model for managing and moving content from the CWS system into the PPDM data model. He also developed the prototype that proved out the approach, and especially that all content could be accommodated within the PPDM data model. After final acceptance of the approach, Kim was responsible for preparing the detailed mappings and process models for moving and transforming content, and also took on a significant component of the development. Some of the trickier parts of the transformation into the PPDM model related to licensing and commingling.
The CORPWELL component for eWATCH was substantially complete at the end of June, 2005.
Kim was asked to join the Enterprise Solutions Delivery Architecture team as part of the Information and Data Architecture group. In addition to working with the team on the development of principles, processes and governance, he was also responsible for setting up the framework and templates for system wide and application/database specific context models and diagrams.
As part of the Architecture team, Kim was the lead on the architecture for a well cross-reference engine. The well cross-reference engine would form a key piece in the integration (physically or otherwise) of well information management at Encana, which, in turn, was a significant component of the Master Data Management strategy. In addition to the architecture, Kim led a small team to prepare a data and process model to support the architecture, and built and demonstrated a prototype implementation.
Kim was seconded to the Encana Data Depot (EDD) team as the EDD project was getting underway. The Encana Data Depot was an Enterprise data integration and data warehousing initiative. Kim played a significant role in developing the architecture, and in the design and implementation of significant components of the overall system, specifically related to hierarchy management, committed costs and field estimates (from WellView), and production volumes and related content (from PVR).
Kim was asked to join the Foothills Division IT group within Encana to work on the Business Information Access (BIA) initiatives to deliver quality data into the hands of the direct business process owners. In addition to consulting on architecture, integration and Software Engineering (in general), there were three main projects that Kim participated in — spend reporting (a data warehouse by another name), health and safety compliancy reporting, and a program for spend analysis related to First Nations (mostly) and other communities in BC.
The spend reporting project was significant in that it was the first systemic effort at Encana to bring together budgeted, committed and actual costs, aligned to operational well information, and deliver it in a meaningful way to end users. This project was the prototype for several other follow on projects at Encana, and it provided valuable lessons-learned for the Encana Data Depot project.
For this short period, Kim was part of a team that was charged with the development of suitable Enterprise architectures and architectural frameworks, mostly around application and data integration. As part of this work, Kim was responsible for conducting analysis and preparing proposed models and designs for better integration of the existing Requisition to Drill (RTD) and Complete (RTC) systems. Separately, as part of a transitional involvement with the WellView support group, Kim provided some maintenance and support for WellView data cleanup.
Kim joined the WellView implementation team following the selection of WellView as the drilling and completion system at Encana. His primary responsibility was for WellView integration. The overall goals and models for integration were formalized and approved at the end of the preceding evaluation and pilot phases.
Kim's roles and responsibilities in support of, and in addition to his primary role as Team Lead for Integration were:
Agile SCRUM was the management approach chosen for the integration efforts. This methodology was considered most appropriate for this project because of its success in the preceding phases, and because SCRUM is most suited to projects like this, where both requirements and technology have significant unknowns. Using this approach, the goal was to provide tangible functionality to the user community on a regular basis. The project was very successful in this regard. Starting in September with the first production release, new and enhanced functionality was promoted into production every Sprint.
The first three Sprints were consumed by requirements and design refinement, and building of the technology infrastructure. The development team had to overcome some significant technological hurdles related to TIBCO and to the two systems being integrated — WellView and the Corporate Well Summary (CWS) system. Neither of these systems were built with this level of integration in mind.
Subsequent Sprints saw significant functionality promoted into production with some regularity — the process for seeding newly licensed wells, AFE import and (basic) well updates. The final components supporting AFE supplementals and the (harder parts of) well updates of boreholes were implemented in the final Sprints. The components necessary to publish operational information out of WellView, primarily for the purposes of regulatory compliance and reporting, were taken out of scope.
Kim was part of the team that was charged with conducting the evaluation of drilling and completions systems on behalf of most of the business units in the Onshore North America division. The project charter states that the project is "... a post merger activity with the intent to review the existing toolset for capture and reporting of Drilling and Completions operations data and recommend a more efficient data capture and management solution."
Four products were under rigorous, preliminary evaluation. This evaluation was based on testing of approximately 200 requirements supporting about 30 components of the defined functional information model, also known as the decision model. Representatives of the business units, as well as members of the core project team, were involved in the testing. The end result was an objective scoring of each of the products against the known requirements.
Kim was responsible for eliciting information and feedback from the business leads assigned to the project team, and for gathering and compiling all available information on workflows and business processes as they impact drilling and completions operations. Based on this analysis, the functional information model was re-cast into packages of typical system interactions that became the basis for user evaluation and scoring. The system interactions were documented in the form of a use case model and cross-referenced to the pertinent numbered requirements. Kim was also responsible for compiling the evaluation results and rolling the results back into the functional model for assessment by the steering committee.
A significant component of the evaluation was the ability of the products to integrate or interface with other information systems within Encana. The interface requirements that the vendors initially responded to did not allow for objective evaluation and comparison between vendors. Kim restructured these requirements into a more abstract form that would allow for a fairer and more equitable assessment of each vendor's capability in this regard. The abstraction was based, for the most part, on support for the Encana definition of a well, as well as support for triggering on key events of interest and "publishing" of data pertinent to that event.
The WellView product was selected for the pilot phase of the project based on the results of the evaluation. During this phase, Kim looked into the details of integration of the product with the overall well asset management business processes. A significant aspect is the use of an EAI framework, based on TIBCO, as the supporting technology for integration.
The project was conducted under a variation of the SCRUM Agile methodology. Again, from the charter, the work is "... time boxed into "sprints" in which the steering committee, project quality advisor and the project team define the "product backlog" with the goal of the sprint to demonstrate business value." Each sprint is approximately 30 days in duration and is characterized by clear objectives and decision points.
Kim provided routine maintenance & support, and hand-holding during this implementation phase of the project, as the day-to-day operations were being transitioned to the iDc data management group. This took about two weeks until the procedures were reasonably stabilized. This work was undertaken as part of the original obligation for all of the previously developed systems.
Unfortunately, in November the server hosting the operational database suffered an irrecoverable crash and everything had to be reconstructed from previous backups and off-site copies of the original source. A decision was made to also carry out the planned migration to Solaris at the same time. All of the in-process programs, including the Java components, were re-installed without any problems. The problem areas were the Perl libraries and modules, especially the conversion from ODBC to Oracle, and the script conversion from NT to Unix. Version differences, installation problems and deprecated syntax stretched the conversion out considerably. The data loading group was responsible for carrying out the (reasonably) straightforward script conversion.
This penultimate phase of the Client's Land Database was also undertaken on a turn-key basis (see previous phase, below). The component data sets for this phase would fill out the remainder of the Client's Western Canada land offerings, as follows:
The data loading paradigm that was employed in the previous phase was evolved further. The abstract object model for mineral leases and units was much more complex than the one that was developed for Freehold land titles. The added complexity arises primarily from the fact that there are many more attributes and relationships to contend with, and, significantly, the abstract model had to accommodate differing business models in each of the jurisdictions.
A complicating factor in the smooth progression of the project was the decision to move to Oracle's in-process Java for the loader components. The decision was based on the poor performance of Oracle Objects, used in Phase II, and on its structural peculiarities that resulted in added code complexity. Ultimately, the decision has proven to be the correct one, but the initial learning curve, coupled with the model complexity, delayed delivery of some early components. In the final stages of development, however, this approach made it possible to easily and quickly incorporate model changes and extensions, and to add in additional data sets.
About mid-project, the Client acquired another data management company that had an existing land information database. Rather than pursuing independent development of parsers and loaders for all of the data sets remaining to be completed at that time, it was determined that the most expedient approach would be to use the acquired database as the primary source of data. The core loading technology would remain the same, but rather than building additional parsers an extractor was designed to pull the data out of the other database directly. This decision made it possible to meet deadline and budget commitments.
The project ultimately finished on time, albeit with several major course corrections along the way. The Perl parser development, the PL/SQL extractor, the bulk of the data loading operations, and other selected work items were sub-contracted to 654708 Alberta Inc. Kim was responsible for the modeling, design and architecture, and development of all of the Java classes. He was also responsible for all of the DBA activities and carried out almost all of the testing and the final implementations.
This phase of the Client's Land Database project was undertaken on a turn-key basis. The contract comprised the modeling, design and development of software to process and load Alberta Freehold mineral rights from Land Title documents, and to carry out the initial load and verification of the source data acquired by the Client. The contract also included components for maintenance and support of software developed in the previous phase, and for the day-to-day data loading operations.
The approach that was taken was less procedural and, perhaps, less expedient than would normally be the case in solving a data loading problem. One of the intentions was to build a logical and a concrete framework that could form the base for future data loading efforts. The framework comprised an object model of freehold land title information abstracted from Land Title documents, and a simple, transient storage and retrieval mechanism to hold object instances, parsed from source documents, as linked class-attribute-value tuples. Furthermore, a standard process was used to manage the transformation from source to final data structures, as follows:
The Perl parser development, and selected portions of the other work items was sub-contracted to 654708 Alberta Inc. Kim was responsible for the modeling, design and overall architecture, and completed all of the Oracle Objects development.
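The transient class-attribute-value holding mechanism described above can be sketched roughly as follows. This is an illustration only: the original was built with Oracle Objects (later in-process Java), and the class, method and attribute names here are invented for the example.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Simplified illustration of a transient class-attribute-value store for
// parsed object instances; all names are assumptions for the example.
public class CavStore {

    // One tuple: an object instance, its abstract class, one attribute, one value.
    record Tuple(int objectId, String className, String attribute, String value) {}

    private final List<Tuple> tuples = new ArrayList<>();
    private final Map<Integer, String> classes = new HashMap<>();
    private int nextId = 1;

    // Register a new instance of an abstract model class, returning its id.
    public int newInstance(String className) {
        classes.put(nextId, className);
        return nextId++;
    }

    // Record one attribute value as parsed from a source document.
    public void put(int objectId, String attribute, String value) {
        tuples.add(new Tuple(objectId, classes.get(objectId), attribute, value));
    }

    // Retrieve an attribute of an instance; the latest parsed value wins.
    public Optional<String> get(int objectId, String attribute) {
        return tuples.stream()
                .filter(t -> t.objectId() == objectId && t.attribute().equals(attribute))
                .reduce((a, b) -> b)
                .map(Tuple::value);
    }
}
```

Holding parsed content as generic tuples, rather than in final table shapes, is what allowed the same framework to be reused for later, more complex data loading efforts.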
Kim was originally brought on to advise on process improvement and on issues related to data management. He was later asked to take over responsibility for the Land Database project. The intent of this project was to build the infrastructure necessary to support the acquisition, management, sale and distribution of publicly available information on land of interest to the oil industry. This would include information on crown agreements, freehold titles and encumbrances, federal lands, calculations of open crown rights, and land sales.
The principal feature of this system is the integration of land information from all jurisdictions under a common PPDM v3.5 land data model. The database supports a 3D rights representation and integration of all reference entities. Further, the system is based on an all-inclusion, single-fluid model instead of the "traditional" combined-fluid, exceptions model.
Kim was responsible for completing the modeling of the transformation of the source data into the PPDM v3.5 model, in addition to designing an overall data management architecture and data loading framework. He was also responsible for the day-to-day project management for the initial phases of the project. At the peak, three programmers were employed on development of the parsing and loading software. Only parsing and loading of the Alberta crown agreements and land sales was completed in this phase of the project.
There are two objectives to this project. In the short term, the client requirement is for a single data store that will house information on accounts payable invoices and directly related data. The sources of data for this database are a document capture system and an Accounts Payable Workflow Management system, both components of the Electronic Document Management System, and, of course, the accounting system, proper. The information is to be gathered, merged and structured for easy access by simple, end-user tools. A data warehousing approach and architecture are being used for this data store.
The longer term objective is to use the Accounts Payable Data Store as a pilot project, prototype and test bed for embarking on a more comprehensive Corporate Query and Reporting program. The lessons learned in building this initial data store will provide guidance for future projects and will give the client a basis for searching out and acquiring data warehouse, and query and reporting suites of tools.
The project was proceeding in accordance with all reasonable expectations for a project of this nature, and was exceeding all of the stated, albeit somewhat vague, requirements. However, the project was eventually terminated on Kim's recommendation because the final, projected costs far exceeded the perceived business benefits. An alternate plan was proposed to simply provide an accessible "data dump" of both sources of invoice information, that would be updated on a daily basis. At face value, this solution provided the users with access to the data at a minimal, incremental cost. This is an interim solution until a longer term corporate data warehousing strategy, and attendant commitment, is in place.
As a final phase of the original project, all of the documentation produced for analysis, design and modeling, was cataloged. A report was produced that captured the "findings" from the endeavour. These findings were primarily about the issues that arose during design with respect to data quality and the rationale for the design decisions that were taken to accommodate integration and transformation of the raw sources. All documentation for the original project, plus the final findings report, were produced and delivered as hyperlinked, web browser accessible pages in a shared directory (not on a web server, unfortunately).
Kim was the analyst and lead designer for a system that was required to provide the new, mainframe based, short term shipment trip planning system with detailed train schedules and associated shipment blocking information. The current system for maintaining train profiles and for building operating train schedules remains intact. These systems are a combination of an Oracle database with a Delphi front end, a mainframe based system for generating train plans and an Oracle subject area database used as the schedule repository.
The shipment blocking information is sourced from a long range shipment and train planning system that is supported by Paradox with a Delphi user interface. The long-range plans produced from this system are used as the source of the train profiles that ultimately become operating train schedules. The profiles are built manually from schedules produced by this system.
The objective is to match trains between the two systems and build merged routes that incorporate the blocking information that is only sourced from the long range planning system. The train matching is complicated by legitimate differences in train names, origin and destination, running days and effective/expiry dates. The merging of the two routes is complicated by the fact that the systems are based on significantly different track networks.
The system to load, match and merge the two train schedules was implemented in a series of Oracle PL/SQL packages. The primary control objects were either Unix shell scripts or "C" executables. The user interface components required to view the merged routes and all customer pull/place events were developed in Delphi. Wherever possible, new components adhere to a strict N-Tier architecture using MQueue for data server communication.
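The tolerant train-matching rule described above might look roughly like the following sketch (in Java for illustration; the production matcher was Oracle PL/SQL, and the specific normalization and matching keys shown are assumptions):

```java
import java.time.LocalDate;

// Illustration only: the production matcher was PL/SQL, and the exact
// normalization rules and matching keys here are assumptions.
public class TrainMatcher {

    record Train(String name, String origin, String dest,
                 LocalDate effective, LocalDate expiry) {}

    // Normalize names so legitimate naming differences (case, punctuation)
    // still compare equal, e.g. "Q-123" vs "Q123".
    static String norm(String s) {
        return s.toUpperCase().replaceAll("[^A-Z0-9]", "");
    }

    // Two trains match when their normalized names and endpoints agree and
    // their effective/expiry date ranges overlap.
    static boolean matches(Train a, Train b) {
        return norm(a.name()).equals(norm(b.name()))
                && a.origin().equals(b.origin())
                && a.dest().equals(b.dest())
                && !a.effective().isAfter(b.expiry())
                && !b.effective().isAfter(a.expiry());
    }
}
```

Once candidate pairs are matched this way, the harder merge step reconciles the routes themselves, which the two systems express over significantly different track networks.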
The Corporate Well System for this (primarily) A&D oil company is the repository and administration tool for managing all of Talisman's wells. The system is designed to maintain information on this key corporate asset and to integrate data pertaining to wells from accounting, land, reserves and the public data sources. The technologies used, the system architecture and the supported business processes are a very significant advance on an existing VAX/Sybase system that is slated for retirement on completion of this project.
The architecture of the system is traditional client/server, except that all support for business rules and processes are managed by back-end Oracle objects. The front-end application supports basic query, edit and reporting capabilities only. Burying the business rules directly in Oracle objects presented some interesting challenges, but resulted in a system that can be integrated with other corporate applications while ensuring the integrity of the business processes.
Kim was responsible for:
Kim assisted a software development company to convert stored procedures supporting business processes in their commercial product from SQL Server to Oracle PL/SQL. First attempts looked at automated conversion using existing tools and techniques but the work eventually fell back to a more reliable manual conversion procedure.
Kim worked with a small engineering and technology company on the design and implementation of a relational data model for managing oilfield equipment. The model is based on the Composite design pattern and on the Object-Property-Event model used in the POSC Epicentre data model. The data model holds both actual equipment information as well as data on the meta-model that describes what a piece of equipment is and how it relates to other pieces of equipment. The implementation is reasonably robust and should be able to handle any and all types of equipment and their properties.
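The two ideas combined above can be shown in miniature: a Composite hierarchy of equipment instances, plus a meta-model that defines which properties a given equipment type may carry. This is an illustrative sketch only; the class and property names are hypothetical and the actual work was a relational schema, not application code:

```python
# Illustrative sketch of a Composite equipment hierarchy constrained by a
# meta-model (in the spirit of the Object-Property-Event approach). All
# names are hypothetical.
class EquipmentType:
    """Meta-model: defines what a kind of equipment is and may record."""
    def __init__(self, name, allowed_properties):
        self.name = name
        self.allowed_properties = set(allowed_properties)


class Equipment:
    """Composite: an equipment item that may contain other items as parts."""
    def __init__(self, tag, eq_type):
        self.tag = tag
        self.eq_type = eq_type
        self.properties = {}
        self.parts = []

    def set_property(self, name, value):
        # The meta-model constrains which properties an instance may hold,
        # so new equipment kinds need only new type rows, not new tables.
        if name not in self.eq_type.allowed_properties:
            raise ValueError(f"{self.eq_type.name} has no property {name!r}")
        self.properties[name] = value

    def add_part(self, part):
        self.parts.append(part)

    def all_tags(self):
        """Walk the composite tree, yielding every equipment tag."""
        yield self.tag
        for part in self.parts:
            yield from part.all_tags()
```

The design choice this illustrates is why the model "should be able to handle any type of equipment": adding a new kind of equipment is a data change (a new `EquipmentType`), not a schema change.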
He was also responsible for rationalizing this company's database administration environment. This involved the analysis of current objectives and a rebuilding of their data management and data loading systems in an attempt to move towards a "lights-out" operation. A component of the work was the evaluation of the new PPDM 3.4 data model and the merging of proprietary information and data model extensions with public data held in the new data model format.
Kim was part of a team of analysts tasked with the re-evaluation of the project to replace the primary waybilling system for a major Canadian railway company with state-of-the-art client/server technology. He was one of the principal analysts on the project team investigating and designing the interfaces between the new client/server waybilling and order entry system and numerous legacy mainframe systems.
He designed and developed a system to process raw field well log data and digitized well log data into a PPDM/POSC database. The system provides sufficient functionality to handle both proprietary and public information. The work also included the specification and management of the outsourced development of a key conversion component. The system is based on the convert-load-verify-post paradigm. The verification component is PC based and the remaining components run under UNIX. Kim also provided partial project management services for this project.
Kim designed and developed base library functionality for processing Geoshare exchanges containing logging information, and the model and software for mapping the Geoshare objects to the appropriate PPDM/POSC database objects. The work included the development of machine-readable specifications and parsers for generating a significant portion of the system components used by several applications. Kim consulted on questions of requirements and design relevant to the database and processing models used in the system. The work was undertaken in C++ under Unix.
Kim was responsible for the on-going design, development, delivery and maintenance of ATS supported data models, data management methodologies and data loading technologies. He was involved in the research and development of technologies to support the emerging industry standard data models that are sponsored by the Petrotechnical Open Software Corporation (POSC) and the Public Petroleum Data Model (PPDM) Association. Kim worked with management and the development staff on the design and refinement of software engineering policies, procedures and tools to support the on-going software creation process within ATS. He also carried out requirements analysis and the design of processes and procedures for data management within a corporate database environment.
He designed and developed data loading methodologies and systems for updating PPDM databases with information provided by regulatory agencies and oil scouting organizations. Components of these systems involved the application of artificial intelligence and fuzzy logic to ensure that the information conforms to the rules enforced by the data vendor. The systems that were developed also incorporated a user definable, rules driven report generation component. The work was based on an Oracle relational database and made extensive use of 'C' language applications and Unix tools and utilities.
Kim provided project management and overall design co-ordination for the development of a Microbiology Laboratory Database System for a major Alberta hospital. The system uses the Ingres relational database and was developed almost entirely within the Ingres 4GL environment. This development was eventually absorbed and completed by another company. Io Software Consultants was only involved in the requirements analysis and preliminary development.
He developed the conceptual design and provided overall design co-ordination and project management for a Flight Following System for use at Arctic flight facilities. This is a Windows application and was developed with the Actor object oriented programming language.
He managed a project team of six programmers and users for the design and development of an Oil and Gas Operations Computerized Monitoring System for the Alberta operations of a multi-national oil company. The system consisted of a centralized Ingres database on a VAX cluster and look-alike dBase applications distributed on field-based PCs. This system may be considered a precursor to some current industry field data capture applications.
Kim created the conceptual framework and was instrumental in the design and development of a data management system that is used to move production and operating data from a process control system to a mainframe database. The work was carried out for the heavy oil operations of a major international oil company. A significant part of the application was the vetting and validation of the raw data as it was transferred automatically from the real-time system. The project used the Oracle relational database on an OS/2 server connected to a Novell LAN. The system was designed to run without human intervention and made extensive use of multi-tasking and remote procedure calls.
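The unattended vetting step described above amounts to checking each incoming reading against acceptance rules before it is posted, and quarantining anything that fails rather than loading it. A minimal sketch, with hypothetical tag names and limits (the actual rules and platform differed):

```python
# Hypothetical sketch of automated vetting of real-time readings: values
# outside configured limits (or missing) are quarantined instead of
# loaded. Tag names and limits are illustrative.
LIMITS = {
    "oil_rate_m3d": (0.0, 5000.0),
    "casing_pressure_kpa": (0.0, 20000.0),
}


def vet(readings):
    """Split raw readings into (accepted, rejected) lists."""
    accepted, rejected = [], []
    for reading in readings:
        lo, hi = LIMITS.get(reading["tag"], (float("-inf"), float("inf")))
        value = reading["value"]
        if value is None or not (lo <= value <= hi):
            rejected.append(reading)   # quarantined for later review
        else:
            accepted.append(reading)   # safe to post to the database
    return accepted, rejected
```

Keeping the rules in a configurable table rather than in code is what lets such a system run without human intervention while still catching bad sensor data.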
He designed and developed numerous databases and associated applications, including customized graphical interfaces, for Heavy Oil Operations and Production, Production Chemistry, Forecast of Crude Oil Deliveries, Major Capital Project Cost Control and AFE Tracking and Control for major Canadian and multi-national oil companies.