Blog do projektu Open Source JavaHotel

Tuesday, 22 January 2013

Just another MVC framework

Introduction
I came to the conclusion that creating a Web interface (even with GWT) is complicated and dotted with problems, so I decided to create another MVC framework utilizing what I have done so far. There are plenty of MVC frameworks, so why create a new one instead of using one that already exists?
I do not know, but I really like the idea of having the UI separated from the business rules; a Web/AJAX approach allowing a reaction to every user action (not only submit/response); and a rich programming language to create business rules with full access to the whole backend environment (datastore). A "programming language" simple enough, on one hand, to avoid all the Java/C++ baggage like interfaces/headers/declarations, but on the other hand rich enough to do what you want, with "hot deployment" during development: modify the source code and the framework runs it immediately. To cut a long story short: I mean Python/Jython. I also do not want to bother with all the details of Web/Ajax programming, especially asynchronous requests and responses. And it should be ready to run on any Web container and interact with any database.
Source code for the first version of the framework is available here (Cache, GwtUI, JythonUI, Test). Source code for the first sample application is available here. The sample CRUD application is deployed to the Google App Engine infrastructure. A Tomcat (non-Google App Engine) application has also been created. How to assemble the Tomcat (and JUnit test) Eclipse project is described in this document.
General idea
  • User interfaces are described in XML documents. The current XSD schema file is available here. A sample XML user interface definition file for the CRUD application is available here. The framework provides a default layout, but it is planned to add the possibility to specify an HTMLPanel for a richer layout.
  • There are four user actions for a CRUD dialog: read all records, add new, remove and modify. There is also a special "before" action for form initialization. For every user action a Jython script is called on the server side. A sample Jython script is available here. The current field values (Web form) are available to the Jython script as a map/dictionary (pairs: "key" -> "value", "field name" -> "field value"). The Jython script should recognize the user action (the "action" parameter in the "dialogaction" method), read the current field values and perform the server action; for the "add" action, add a new record to the datastore. It is also possible to return an error message (for instance, a duplicated key). Results are also stored as a Jython dictionary.
  • The server side of the framework translates the Jython dictionary back to a Java Map and sends this map to the client.
  • The client reads the answer and performs the next actions: for instance, it displays an error message in case of error, refreshes the list in case of success, navigates to the next dialog, etc.
  • For the time being only a very limited subset of actions and user interface widgets is implemented, but it will be enhanced gradually.
  • Developing a client application with this framework means adding new XML documents and developing business logic with Jython scripts. Database access can be achieved via Jython directly or (as in the sample application) by creating Java DAOs (Data Access Objects). Jython interacts with Java smoothly, so either method is feasible.
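To illustrate the flow described above, here is a sketch of such a server-side script. The "dialogaction" method name and the "action" parameter come from the description above; the in-memory record store, field names and action strings are my assumptions, not the framework's actual API:

```python
# In-memory stand-in for the real datastore; in the sample application
# this role is played by a Java DAO called from Jython.
RECORDS = {}

def dialogaction(action, values):
    """Entry point called for every user action. 'values' holds the
    current Web form fields as a dictionary; the returned dictionary
    travels back to the client."""
    if action == "add":
        key = values.get("name")
        if key in RECORDS:
            # duplicated key: report an error instead of applying the change
            return {"errormessage": "Record '%s' already exists" % key}
        RECORDS[key] = dict(values)
        return {}
    if action == "remove":
        RECORDS.pop(values.get("name"), None)
        return {}
    if action == "readall":
        return {"list": sorted(RECORDS)}
    return {}
```

On the server side the framework would translate the incoming Java Map to this dictionary and the returned dictionary back to a Java Map.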
Framework structure, source code
  • GwtUI. There are three packages: client code, server code and shared (between client and server) code. It is a framework interacting directly with GWT. It is based on the publish/subscribe pattern, which I found well suited to the asynchronous nature of Ajax applications. It adds logic for displaying and executing forms, lists, menus, CRUD lists, etc. At first I wanted to develop the client application using this framework directly, but after some time and a lot of code I realized that it was too complicated.
  • JythonUI. Consists of three packages: client code, server code and shared code. It is based on GwtUI (it does not interact with GWT directly) and provides the interaction between XML documents and Jython user actions.
  • Cache: support for GAE MemCache. Allows using JCache in the application. For the Tomcat application there is a simple Map implementation.
  • Test: a meaningful implementation exists only for Google App Engine JUnit tests. It allows creating the same JUnit test suite for the Google App Engine datastore and a JPA implementation.
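The publish/subscribe pattern GwtUI is built around can be sketched in a few lines. This is a generic sketch in Python for brevity; GwtUI itself is Java/GWT and its actual API differs:

```python
class EventBus:
    """Minimal publish/subscribe bus: components register handlers for
    a topic and react whenever an event for that topic is published,
    which fits the asynchronous Ajax request/response flow."""

    def __init__(self):
        self._subscribers = {}

    def subscribe(self, topic, handler):
        self._subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, payload=None):
        # every handler registered for this topic receives the payload
        for handler in self._subscribers.get(topic, []):
            handler(payload)
```

The point of the pattern is that the publisher (e.g. the code receiving an asynchronous server response) does not need to know which widgets will react.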
Third party tools
  • Jython. An interpreted language for developing server-side business logic. During development, Jython code can be modified directly and becomes effective immediately without any recompiling; one can call it hot reloading. For production, Jython can be translated to byte code and executed directly without recompiling. The JythonUI server code translates a Java Map to a Jython dictionary and vice versa.
  • Guice. Although for now it ties together only several modules (sample AbstractModule), I found it very useful. For each of the four applications (WebGoogleAppEngine, WebTomcat, JunitTestGoogleAppEngine and JUnitJPA) I created a separate AbstractModule, and this way I was able to put all the pieces together. For instance: provide the JPA implementation of the datastore for the JPA application and the Google App Engine implementation of the same datastore for the Google App Engine application.
  • GIN (GWT INjection): a code injection framework for GWT. Used intensively in GwtUI.
Third party tools used for Sample application
  • Objectify: I decided to use Objectify instead of the JPA/JDO Google App Engine implementation. I fully agree with a simple statement found here: "The GAE datastore is not an RDBMS", and found that Objectify is a good answer to that issue. The general idea behind JPA/JDO is that we put a simple interface on top of the more complex RDBMS/JDBC concept. But in the case of Google App Engine it is just the opposite: JPA/JDO is something more complicated put on top of something simple, a NoSQL key/value database.
  • EclipseLink: outside the Google App Engine world I still think that a traditional RDBMS is a good choice, and a JPA layer gives a lot of advantages compared to bare JDBC.
  • Apache Derby: a good choice for testing or development. It does not require installing or tuning; just put derby.jar anywhere on the Java classpath, create the URL, connect and use it.
Future development
This is only the first stage, a "proof of concept". It looks promising: I created two fully tested (simple) web applications, for Google App Engine and for Tomcat (JPA/RDBMS). Future development means enabling more GWT widgets and user actions. I'm also planning to extend business logic programming by adding more JVM-based languages, including Java, and also a bridge to CPython (by means of JNI).
Problems
The most important problem is running Jython on Google App Engine. Because Google App Engine can create a new JVM at any time, Jython must also be reinitialized at any time. This means unexpected, very long delays (10-15 seconds), which make the whole solution almost useless. For the Tomcat-based application there is only a one-time delay at the beginning of the session, and afterwards it runs smoothly all the time. I will try to understand the reason for this delay; maybe deploying already compiled Jython packages will resolve the problem.

Sunday, 20 January 2013

We attended a concert

We attended a concert at the Filharmonia Narodowa (National Philharmonic) on 12 January 2013. The programme was filled with contemporary music by Lutosławski and Olivier Messiaen. It was dominated by Messiaen's Des canyons aux étoiles for piano, horn, xylorimba, glockenspiel and orchestra, so it is hard to say anything about the piece from Lutosławski's "Łańcuch" (Chain) cycle, as my impression of it has faded.
Not everyone took to Messiaen; part of the audience left during the performance. But those who stayed to the end certainly did not regret it. One could hear instruments rarely encountered in the classical repertoire: cimbalom, xylorimba (a variant of the xylophone), tubular bells, and an extended percussion section. The double bass appeared in the role of a percussion instrument. The solo parts were performed by horn and piano, although listeners who associate the piano mainly with Chopin may have been in for a shock. An apparatus for producing the sound of wind also played a large role. The piece itself consists of 12 movements inspired by Messiaen's impressions from a visit to the state of Utah in the USA in 1972, especially Bryce Canyon National Park, famous for its extraordinary views. Although thematically related and stylistically similar, each movement is distinct, and one could listen with surprise to traditional instruments such as violin, cello or winds producing music completely different from the traditional repertoire. What stuck in my memory most was the solo piano part in the fourth movement: Le cossyphe d'Heuglin ("The white-browed robin-chat").
A more detailed description of the individual movements and their content can be found here.

Tuesday, 8 January 2013

Google App Engine and Jython

Introduction
I'm planning to use Jython with Google App Engine for Java. My purpose is to use Jython as a scripting language for a Java-based solution, not to use Jython for server code development. I was interested to check whether I could run Jython (Python) packages on the server side. It was successful, but with several drawbacks. The source code (Java and a simple Jython package) is uploaded. The application (a simple extension of the default Eclipse plugin application) is available here.
Problem, strange behaviour in Development Mode
While running the application in Development Mode, the following exception was thrown:

Caused by: java.lang.NoClassDefFoundError: java.io.FileOutputStream is a restricted class. Please see the Google  App Engine developer's guide for more details.
 at com.google.appengine.tools.development.agent.runtime.Runtime.reject(Runtime.java:51)
 at org.python.core.io.StreamIO.getOutputFileDescriptor(StreamIO.java:205)
 at org.python.core.io.StreamIO.getOutputFileDescriptor(StreamIO.java:212)
 at org.python.core.io.StreamIO.getOutputFileDescriptor(StreamIO.java:212)
I found that the culprit is the following method in PySystemState.java
 private void initEncoding() {
        String encoding = registry.getProperty(PYTHON_CONSOLE_ENCODING);
        if (encoding == null) {
            return;
        }
        for (PyFile stdStream : new PyFile[] {(PyFile)this.stdin, (PyFile)this.stdout,
                                              (PyFile)this.stderr}) {
            if (stdStream.isatty()) {
                stdStream.encoding = encoding;
            }
        }
    }
The java.io.FileOutputStream is not on the Google App Engine "white list", but the same code runs after deployment to the Google App Engine environment. The solution is to remove the file.encoding property (although this property is set to a non-null value in Google App Engine).
  System.getProperties().remove("file.encoding");
Non-blocking exception
The following exception is also thrown
Caused by: java.security.AccessControlException: access denied (java.lang.RuntimePermission modifyThreadGroup)
 at java.security.AccessControlContext.checkPermission(AccessControlContext.java:374)
 at java.security.AccessController.checkPermission(AccessController.java:546)
 at java.lang.SecurityManager.checkPermission(SecurityManager.java:532)
 at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkPermission(DevAppServerFactory.java:289)
 at com.google.appengine.tools.development.DevAppServerFactory$CustomSecurityManager.checkAccess(DevAppServerFactory.java:314)
 at java.lang.ThreadGroup.checkAccess(ThreadGroup.java:299)
 at java.lang.Thread.init(Thread.java:336)
 at java.lang.Thread.&lt;init&gt;(Thread.java:462)
It seems to be related to the fact that threads are not allowed in Google App Engine, but this exception seems to be non-blocking.
Performance
    static String getrVal() {
        putMessage("Moment 1");
        String encoding = System.getProperty("file.encoding");
        putMessage("Encoding:" + encoding);
        System.getProperties().remove("file.encoding");
        PythonInterpreter interp = new PythonInterpreter();
        URL ur = JythonMeth.class.getClassLoader().getResource("resource");
        String sRe = ur.getFile();

        interp.exec("import sys");
        interp.exec("print sys.path");
        interp.exec("sys.path.append('" + sRe + "')");

        interp.exec("import sys");
        interp.exec("print sys.path");

        putMessage("Moment 2");
        interp.exec("import mypack");
        putMessage("Moment 3");

        interp.exec("from mypack import myprint");
        putMessage("Moment 4");
        interp.exec("myprint.myprint()");
        putMessage("Moment 5");
        interp.exec("GG = myprint.getVal()");
        PyObject sy = interp.get("GG");
        PyString u = (PyString) sy;
        String ss = u.getString();
        putMessage(ss);

        Map<Object, PyObject> m = new HashMap<Object, PyObject>();
        PyObject keyS = new PyString("value1");
        m.put("key1", keyS);
        keyS = new PyString("value2");
        m.put("key2", keyS);
        keyS = new PyString("value3");
        m.put("key3", keyS);
        PyStringMap pyMap = new PyStringMap(m);
        interp.set("GG", pyMap);
        interp.exec("myprint.myprintMap(GG)");

        return ss;
    }
The performance after deploying to Google App Engine is not satisfactory, but it depends on the machine type set as the host. The timings apply only when the server runs the code for the first time; the second and subsequent runs execute in no time. But in the case of Google App Engine a request can be redirected to a fresh machine at any time, so this problem really matters.
Machine   Moment 1-2 (sec)   Moment 3-4 (sec)
F1        11                 6
F2        7                  6
F4        5                  3
Desktop   2                  4
F1, F2 and F4 are described here. Moment 3-4 is related to the compilation of the Jython package and can be improved by precompiling the package before deployment. Moment 1-2 is the time consumed by Jython initialization. I do not understand why there is such a difference between the local desktop and the Google App Engine environment, and I do not see any solution to that problem.

Friday, 4 January 2013

Stock management, simple database solution


Motivation
I decided to create a simple database project related to “stock management”. I mean basic warehouse activities: stock (items) delivery, reservation and release (stock in/stock out). The solution controls amount and value. By “database project” I mean creating database schema, server code (by means of stored procedure) executing basic functionality and unit tests.
I'm also interested in whether it is possible to create server code ready to run on the DB2 database and on SQLAnywhere (also MySQL), because the SQL dialects implemented there are very close to each other.

Source code and test code.

Stock management

General idea

  • All stock movement is described in terms of "operations". An operation (document) consists of an operation header and a list of lines, each describing an activity related to a single item.
  • All documents are immutable. After applying a document, the only way to modify it (even to make a correction) is to submit another document, related to the previous one, carrying the desired correction (plus/minus).
  • The solution controls the amount of items, also in fractions (four digits after the decimal point). So we can deliver 10 kg of something and then release 0.5 kg, 0.7 kg, 1.6 kg, etc.
  • The solution also controls the value of the items. I'm using the term "value", not "price". By "value" I mean only a value for accounting purposes. It is usually related to the purchase cost. It has nothing in common with a selling price; that is a completely different story. We can purchase an item at a price of $5 and then sell it with a 10% profit ($5.50), a 100% profit ($10), a 50% discount ($2.50), or sell it for a song. The purpose is only to have an accounting measure: what is the value of our current warehouse content.
  • A document line not related to any other document line is an item delivery. It also marks the initial item value. All other operations related to this item (for instance an item release) should contain a pointer to this first line. It is also possible to have a chain of pointers, for instance: 1) "item delivery" - 2) "item release" (points to 1) - 3) "correction of item release" (points to 2). The same item can be delivered many times.
  • The current content of the warehouse is the item deliveries plus the chain of operations related to each delivery. All operations describe a change. For instance: 1) deliver 10 kg of an item, 2) release -2 kg of the item, 3) correct this release by -1 kg (one kg more). So the current amount of this item is 7 kg (10 - 2 - 1).
  • The solution also manages item reservation. "Reservation" means that some amount of an item (item delivery) is included in the warehouse value but is not available to release. Of course, this reservation cannot last forever, to avoid item zombies. A reservation can be revoked or transformed into an item release. But the logic for controlling the reservation life cycle (for instance, automatically revoking a reservation after a period of time elapses) is not developed here; it should be the subject of an additional solution.
  • The solution also covers item value revaluation (devaluation), to keep the warehouse value more realistic. It is a type of operation line with a zero amount change but a non-zero value change.
  • To calculate a release value, a weighted average is used: just divide value by amount. But it is also possible to set the release value manually, bypassing this default behaviour.
  • All prices and values are fractions with two digits after the decimal point.
  • Because items can be released in fractional amounts, there can be differences between the delivery value and the release value. For instance: we deliver 2 kg with value 2.01 and then release 2 kg with a calculated weighted price of 1.01 (or 1.00). After that we will get amount 0 and value -0.01 (or +0.01). To avoid such zombie items and keep changes consistent, an additional operation line is generated reflecting only the value change: 1) line (delivery): amount 2.00, value 2.01; 2) line (value change): amount 0.00, value -0.01; 3) line (item release): amount -2.00, value -2.00. By applying this rather artificial solution we keep the whole change history consistent, without gaps or holes.
  • The solution allows multi-store management. The document header of an item delivery should contain a store identifier, and all changes related to this delivery modify the item inside this particular store. To relocate an item between stores, the item should be released from the first store and delivered again to the other store.
  • Together with the operation lines, an "item dashboard" is also maintained. It contains the current item (meaning item delivery) status: amount, amount available (amount minus reservations) and value. This table can be recreated at any time by iterating through the operation lines. When an item is released completely, the corresponding entry is deleted from the dashboard, but the item history is kept in the operation documents.
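The weighted-average release value and the balancing zero-amount line described above can be modelled in a few lines of Python. This is a sketch only; the real implementation lives in SQL stored procedures, and the rounding mode and helper names here are my assumptions:

```python
from decimal import Decimal, ROUND_HALF_UP

CENT = Decimal("0.01")  # values carry two digits after the decimal point

def release_lines(current_amount, current_value, release_amount):
    """Build the operation lines for an item release: weighted-average
    unit price times the released amount, plus a zero-amount balancing
    line when a full release would leave a leftover value."""
    unit_price = (current_value / current_amount).quantize(CENT, ROUND_HALF_UP)
    release_value = (unit_price * release_amount).quantize(CENT, ROUND_HALF_UP)
    lines = [{"amount": -release_amount, "value": -release_value}]
    if release_amount == current_amount and release_value != current_value:
        # value change only: avoids "zombie" items with amount 0, value != 0
        lines.append({"amount": Decimal("0"),
                      "value": release_value - current_value})
    return lines
```

For the example above (2 kg delivered with value 2.01, 2 kg released), the weighted unit price rounds to 1.01, so a balancing line with amount 0.00 is generated to zero out the remaining value.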

Additional information and ideas
In a real business environment this basic solution should be enhanced with warehouse document management. For instance, an item delivery can be the result of:

  • Item purchase
  • Correction
  • Complaint after selling. The customer returns the item and it is delivered again to the store of “complaint goods”.
  • In a factory an item just produced is delivered to the store of “finished goods”.
  • Item store relocation.
  • During physical inventory some items are discovered and should be included into the database.
  • Etc

Any such situation is described by different documents created in different contexts, but in terms of the warehouse the basic operation is the same: an amount of an item is delivered to the store. In the database design it could be an additional table (or set of tables) with a one-to-one or one-to-many relationship to the basic operation table.
The item description here is very simple: item ID and item CODE are enough. In a real environment a more detailed item description is necessary, but it is not a problem to create a more descriptive table (or set of tables) with a one-to-one relationship to the basic table.
I believe that it is possible to develop a more sophisticated solution while keeping this basic level untouched.
The solution identifies a person. In a real environment something more sophisticated is necessary, especially related to authorization and safety. Some persons could have the right to access only a particular store or item, or be authorized to perform only specified operations.
Freeze an item for a while: for instance, when there is some uncertainty about its quality and we want to stop any activity around it for some time.
End of the accounting year: move data related to completely released items to an archive database. For partially released items, move the historical data to the archive database, close the old books and open new ones.


Database design.
DDL code.
(Diagram done by IBM Data Studio, freely available here)


The database is a simple star schema: OPERATION and OPERATIONLINE are fact tables surrounded by dimensions (person, items and stores, but more can be added). ITEMDASHBOARD is a transactional table containing the current warehouse content.

Operation (document header)
CREATE TABLE "OPERATION" (
                "ID" BIGINT NOT NULL, 
                "OPDATE" DATE NOT NULL, 
                "STORE" BIGINT, 
                "PERSON" BIGINT NOT NULL
        );
  • ID - primary key, generated
  • OPDATE - date of operation
  • STORE - store identifier (matters only for item delivery)
  • PERSON - person who executes the operation
CREATE TABLE "OPERATIONLINE" (
                "ID" BIGINT NOT NULL , 
                "ITEM" BIGINT NOT NULL, 
                "AMOUNT" DECIMAL(15 , 4),
                "AMOUNTAVAILABLE" DECIMAL (15 , 4), 
                "VALUE" DECIMAL(20 , 2), 
                "REFERENCELINE" BIGINT, 
                "OPERATION" BIGINT NOT NULL,
        "SEQNUMBER" INTEGER NOT NULL
        ) ;
  • ID - primary key, generated
  • ITEM - item referenced
  • AMOUNT - + item amount increase, - item amount decrease
  • AMOUNTAVAILABLE - change in the amount available; for a reservation only this column should be non-zero and AMOUNT should be 0
  • VALUE - change in item value related to AMOUNT. Important: it is the value of the whole increase/decrease, not the price of a single unit
  • REFERENCELINE - NULL only for an item delivery; otherwise a change to the item delivered; foreign key to OPERATIONLINE
  • OPERATION - foreign key to OPERATION
  • SEQNUMBER - keeps operation lines in order; the sequence number of the line within the document
CREATE TABLE "ITEMDASHBOARD" (
     "ITEMID" BIGINT NOT NULL,
     "DELIVERYLINE" BIGINT NOT NULL,
     "CURRENTAMOUNT" DECIMAL(15,4) NOT NULL,
     "CURRENTAVAILABLE" DECIMAL (15,4) NOT NULL,
     "CURRENTVALUE" DECIMAL (20,2)
) ;
  • ITEMID - item
  • DELIVERYLINE - foreign key to the operation line, item delivery line
  • CURRENTAMOUNT - current amount (increases minus decreases)
  • CURRENTAVAILABLE - CURRENTAMOUNT minus reservations
  • CURRENTVALUE - current value of this delivery

Temporary table used for probing and operation submitting.

CREATE GLOBAL TEMPORARY TABLE "TEMP_ITEM_CODES_LIST" (
  "SESSIONID" VARCHAR(128) NOT NULL,
  "SEQNUMBER" INTEGER NOT NULL,
  "ITEMCODE" CHAR(30) NOT NULL,
  "AMOUNT" DECIMAL(15 , 4) NOT NULL,
  "VALUE" DECIMAL (20,2)
 ) ON COMMIT PRESERVE ROWS;

  • SESSIONID
  • SEQNUMBER - sequential number of this line in the document; it keeps the lines in order.
  • ITEMCODE - item code (not item id)
  • AMOUNT - amount of the item being requested
  • VALUE - value of the item being requested.

A temporary table used as a parameter for stock operations. It contains the lines performing item changes. The columns in this temporary table reflect the columns of the OPERATIONLINE table.

CREATE GLOBAL TEMPORARY TABLE TEMP_ITEM_IDS_LIST (
  "SESSIONID" VARCHAR(128) NOT NULL,
  "SEQNUMBER" INTEGER NOT NULL,
  "ITEMID" BIGINT,
  "OPERATIONLINE" BIGINT,
  "AMOUNT" DECIMAL(15 , 4),
  "AMOUNTRESERVED" DECIMAL (15, 4),  
  "VALUE" DECIMAL(20 , 2) 
 ) ON COMMIT PRESERVE ROWS NOT LOGGED ON ROLLBACK PRESERVE ROWS;

Server code.
Server code SQL.
The idea is to create an API, in terms of stored procedures, for performing the database operations. It is described below. Because more than one item can be delivered (or released) in a single operation, temporary tables (CGTT, Created Global Temporary Tables) are used. A different copy of a CGTT is maintained for every connection. But in a 3-tier environment the same connection is often shared between different users, so the SESSIONID column is also used to distinguish data coming from different users.
One SP is used for item probing (telling whether it is possible to release or reserve an amount of an item), and the second one is the entry point for performing a warehouse operation.
Here is a brief SP description (more details in the SQL code comments):
  • GET_DELIVERY_ID (UDF) - resolves the chain of REFERENCELINE columns to get the source delivery line.
  • GET_LAST_ID (UDF) - DB2 specific. Gets the last generated identity (autoincrement) column value.
  • LOGMESS (SP) - stores a log message for logging purposes (a DB2 autonomous block, to keep it safe against ROLLBACK).
  • CLEAR_TEMP_SESSION (SP) - clears the temporary (parameter) tables.
  • STOCK_SIGNAL - signal (exception), DB2-specific implementation.
  • GET_ITEM_IDS_FOR_DELIVERY (SP) - the source table is TEMP_ITEM_CODES_LIST, the output is TEMP_ITEM_IDS_LIST. Resolves item codes and makes the table ready for item delivery.
  • GET_ITEMS_FOR_RESERVATION - source and output as above. It is the probing procedure, which checks and answers whether it is possible to reserve or release an item. It resolves an item code and amount into delivery lines and can split one item across several deliveries. If it is not possible, it returns the amount of the item available now.
  • INNER_OPERATION_OP, OPERATION_OP - OPERATION_OP wraps INNER_OPERATION_OP to issue a ROLLBACK command in case of failure. The source table is TEMP_ITEM_IDS_LIST. It applies the changes from the source table, creates the operation document and updates the ITEMDASHBOARD table. Failure is possible even after running GET_ITEMS_FOR_RESERVATION, because between probing (GET_ITEMS_FOR_RESERVATION) and OPERATION_OP another session can release some amount of the item.
  • UPDATE_DASHBOARDITEM - used by OPERATION_OP and REFRESH_DASHBOARD_TABLE. Updates the ITEMDASHBOARD table.
  • REFRESH_DASHBOARD_TABLE - recreates the ITEMDASHBOARD table by iterating through OPERATIONLINE.
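The reference-chain resolution (GET_DELIVERY_ID) and the dashboard rebuild (REFRESH_DASHBOARD_TABLE) can be modelled in Python as follows. This is a sketch of the logic only, with reservations omitted; the actual implementation is a DB2 UDF and stored procedure:

```python
def get_delivery_id(lines, line_id):
    """Follow the REFERENCELINE chain until the delivery line
    (the line whose REFERENCELINE is NULL/None)."""
    while lines[line_id]["referenceline"] is not None:
        line_id = lines[line_id]["referenceline"]
    return line_id

def refresh_dashboard(lines):
    """Recreate the dashboard content by iterating over all operation
    lines (amount and value only; reservations are omitted here)."""
    dashboard = {}
    for line_id, line in lines.items():
        delivery = get_delivery_id(lines, line_id)
        entry = dashboard.setdefault(delivery, {"amount": 0, "value": 0})
        entry["amount"] += line["amount"]
        entry["value"] += line["value"]
    # completely released deliveries drop out of the dashboard
    return {d: e for d, e in dashboard.items() if e["amount"] != 0}
```

This reproduces the example from the previous post section: deliver 10, release 2, correct by 1 more, and the dashboard shows 7 for the delivery line.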
Transaction
OPERATION_OP issues a ROLLBACK in case of failure, but it is the responsibility of the caller to issue the COMMIT command. An item release (operation) can be part of a broader transaction, so all changes can be covered by a single transaction.
Safety
All modifications of the fact tables (OPERATION and OPERATIONLINE) and of ITEMDASHBOARD are made by OPERATION_OP. Also, all SQL statements are static SQL. So it is possible to grant a client application only read-only access to the main tables, thus making the whole solution safer.

Testing 
All tests are available here. Two types of tests were executed.
  • Functional tests (test1, test2, test3 and test4)
  • "Mass" test - see the description. The test program (written in Python) randomly creates deliveries and releases of items. This test can be launched from many sessions simultaneously. Every test keeps its expected changes locally and, before quitting, saves the expected results in the TESTSUMMARY table. After the test a statement searches for inconsistent entries; it should yield 0 rows in case of success.
SQLAnywhere
The current version is available only for DB2, but a SQLAnywhere version is also planned. Because of this planned migration I had to give up several nice features not supported by SQLAnywhere:
  • Table, association tables.
  • CREATE TYPE - so there are a lot of repeated DECIMAL(20,2) in the SQL and DDL code.
  • Condition handler.
  • Only the FOR loop is used.
  • Etc.
Conclusion

This solution is very simple, but I believe that it is possible to surround it with a more general solution while keeping this basic stock management as a separate product and the root for further development.

Because keeping stock management valid is essential, I believe this approach makes sense. Regardless of the quality of the client application, we can keep the warehouse state consistent and valid for accounting and taxation purposes. All warehouse item changes are documented and consistent.

Future

  • XML documents as input or output
  • Next layer of software - document handling, marketing, sales
  • Sample client application based on this basic solution
  • WebService
  • SQLAnywhere and MySQL versions

Thursday, 3 January 2013

db2odbc_fdw, db2 and postgresql

Introduction
I created a new version of DB2/ODBC FDW extension. This version implements connection cache but previous functionality (without cache) is still maintained.
Source code and README are available here.
Problem
Without a cache, any reference to a foreign table triggers the sequence: connect -> query execution -> disconnect. This is very convenient because you do not need to bother about the connection to the external data server, but it means significant performance degradation, because establishing a connection to a database (particularly when the foreign database is hosted on another server) is very costly. It does not matter if you have a relatively small number of connections and costly SQL statements to execute, but it could matter otherwise.
So the new version keeps the connection in a local cache and reuses it later. But this leads to another problem: what to do if, in the meantime, the connection to the foreign database is broken, or the foreign database is stopped or restarted? ODBC does not have any method for probing whether an ODBC connection (SQLHDBC handle) is alive.
Solution
To define a cached connection, an additional parameter is required. Example:

CREATE SERVER db2odbc_servercached 
        FOREIGN DATA WRAPPER db2odbc_fdw 
        OPTIONS (dsn 'TSAMPLE' , cached '-30081');

The 'cached' parameter has a double meaning:

  1. It defines a 'cached' connection.
  2. It defines the native exception code which triggers a connection retry after a query execution failure (-30081 in the case of DB2). This allows distinguishing an SQL failure caused by a broken connection from a failure caused by another reason (for instance, invalid query syntax). The value -1 means "retry always". The retry is done only once; a second failure fails the whole statement.
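The retry behaviour from point 2 can be modelled as follows. This is a Python sketch of the control flow only; the real extension implements this in C around ODBC calls, and all names here are mine:

```python
def execute_with_retry(conn_cache, dsn, query, connect, run, retry_code):
    """Run a query on a cached connection; on a failure whose native
    error code matches retry_code (-1 means "retry always"), reconnect
    once and retry. A second failure fails the whole statement."""
    conn = conn_cache.get(dsn)
    if conn is None:
        conn = conn_cache[dsn] = connect(dsn)
    try:
        return run(conn, query)
    except RuntimeError as exc:
        native_code = exc.args[0]
        if retry_code != -1 and native_code != retry_code:
            raise  # failure not caused by a broken connection
        # drop the stale handle, reconnect and retry exactly once
        conn = conn_cache[dsn] = connect(dsn)
        return run(conn, query)
```

Here a RuntimeError carrying the native error code stands in for the ODBC SQLSTATE/native-error pair returned by the driver.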
Test
I also created a set of regression tests (in terms of BoaTester) covering the basic functionality. The tests are available here. The test scenarios are roughly:
  1. Connect to DB2, create a table and insert several rows.
  2. Create a corresponding foreign table in Postgresql.
  3. Run a statement and verify that the foreign table in PostgreSQL has the same content as the original.
  4. Run a sequence of (100) statements against the foreign table to make sure that connection reuse works.
Additional problem
The connection cache is implemented as a simple list, and the head of the list is kept as a static variable in the module. This is not thread-safe, and further investigation is necessary.

Additional remarks, future
The solution has been tested against DB2 (version 10.1) but should work with any other database server. Additional tests will be performed in the future.
In the case of DB2, a test against an SP (stored procedure) returning a result set will also be performed.