javahotel

Blog for the Open Source JavaHotel project

Saturday, September 9, 2017

DB2, UTL_ENCODE, BASE64_DECODE, BASE64_ENCODE

I created an implementation of the two methods from the Oracle UTL_ENCODE package: BASE64_ENCODE and BASE64_DECODE. They are implemented as a Java UDF and a DB2 module. In Java, it is simply a utilization of the JVM Base64 class; more time-consuming was preparing the DB2 signatures and tests according to this article.
Full source code is here.
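Just for reference, the JVM side boils down to the standard java.util.Base64 API. A minimal sketch (written in Scala for brevity here, while the actual UDF is plain Java and its DB2 signatures live in the repository):

  import java.nio.charset.StandardCharsets
  import java.util.Base64

  // Minimal sketch of the conversion the UDF wraps; the real code is a Java UDF
  // registered through a DB2 module, this only shows the JVM Base64 calls.
  object Base64Sketch {
    def base64Encode(data: Array[Byte]): String =
      Base64.getEncoder.encodeToString(data)

    def base64Decode(encoded: String): Array[Byte] =
      Base64.getDecoder.decode(encoded)

    def main(args: Array[String]): Unit = {
      val enc = base64Encode("Hello DB2".getBytes(StandardCharsets.UTF_8))
      println(enc)                                                   // SGVsbG8gREIy
      println(new String(base64Decode(enc), StandardCharsets.UTF_8)) // Hello DB2
    }
  }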

Sunday, August 27, 2017

Civilization The Board Game

Introduction
Some time ago I became a fan of Civilization The Board Game. I find it more engaging, dynamic and enthralling than the computer game. It is like comparing real melee combat with a bureaucratic war waged from behind an office desk.
So the idea came to me to move the game to the computer screen: avoid the stuff piling up on the table, train and test ideas without laying out the physical board, and allow players in remote locations to fight each other.
For the time being, my idea has ended up in two projects.
CivilizationEngine here
Civilization UI here
Demo version on Heroku: https://civilizationboardgame.herokuapp.com/  (wait a moment until the dyno is activated; it runs on the free quota).
Each project comes with its own build.xml file allowing the creation of the target artifact.
General design principles
The solution consists of two separate projects: Civilization Engine and Civilization UI. I decided that all game logic and state are managed by the back-end engine. The UI, as the name suggests, is focused only on displaying the board game and allowing the user to execute a command. The command is sent to the server, the server changes the game state, and the UI receives the current game state and updates the screen.
The game is nothing more than moving from one game state to another. Every change is triggered by a command. At any moment, it is possible to restore the current game state by setting up the initial board and replaying all commands up to that point.
Data is transmitted between the engine and the UI in JSON format.
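Since the whole game state is derived from the initial board plus the command history, replaying is just a left fold over the commands. A minimal sketch of the idea (GameBoard, Command and executeCommand below are simplified stand-ins, not the engine's real types):

  object ReplaySketch {
    // Simplified stand-ins for the engine's real types
    case class Command(name: String, param: String)
    case class GameBoard(history: List[Command])

    // Applying a command yields a new board state (here it only records the command)
    def executeCommand(b: GameBoard, c: Command): GameBoard =
      b.copy(history = b.history :+ c)

    // Restoring the current state = replaying all commands from the initial board
    def replay(initial: GameBoard, commands: Seq[Command]): GameBoard =
      commands.foldLeft(initial)((b, c) => executeCommand(b, c))
  }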
Civilization Engine
Civilization Engine is created as an IntelliJ IDEA Scala project and can be imported directly from GitHub.
Why Scala? I found it very appropriate here. Most of the operations involve walking through lists, looking things up, filtering and mapping, and Scala is an excellent tool for that. Had I decided to use Java, the code would probably have doubled in size, even with Java 8 streaming features.
I'm very fond of this function (full source):
  def itemizeForSetSity(b: GameBoard, civ: Civilization.T): Seq[P] =
    getFigures(b, civ).filter(_.s.figures.numberofScouts > 0).map(_.p).filter(p => SetCityAction.verifySetCity(b, civ, p, Command.SETCITY).isEmpty)
It yields all points where a new city can be set.

  • Find all figures on the board belonging to a civilization
  • Single out squares with at least one scout
  • Map squares to points
  • Verify if the point is eligible for city setting using SetCityAction.verifySetCity
All of that in a single line.
A general outline of the project
  • resources, game objects (JSON format) used in the game: tiles, squares, objects (currently TECHNOLOGIES only)
  • gameboard, class definitions
  • objects, enumerations and classes related to game artifacts
  • helper, game logic; I found it more convenient to keep the logic in helper objects rather than as methods of the GameBoard class
  • io, methods for reading and writing data in JSON format, using the Play JSON package as a dependency
  • I, the external interface
Brief interface description
  • getData(LISTOFCIV), returns the list of available civilizations
  • getData(REGISTEOWNER), generates a new game and returns a unique token to be used in further communication
  • getData(GETBOARDGAME), returns the current game state
  • executeCommand, executes the next command
  • itemizeCommand, provides all possible parameters for a particular command; for instance, for StartOfMove it returns all points where figure movement is allowed to commence (see the sketch below)
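Purely as an illustration of the shape of this interface (the real signatures are in the I package and may differ), it can be thought of as something like:

  // Hypothetical sketch; all payloads travel between the engine and the UI as JSON strings
  trait CivilizationEngineApi {
    def getData(what: String, token: String = ""): String         // LISTOFCIV, REGISTEOWNER, GETBOARDGAME
    def executeCommand(token: String, commandJson: String): Unit  // apply the next command to the game
    def itemizeCommand(token: String, command: String): String    // possible parameters for the command
  }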
So far, only a few commands are implemented:
  • SetCapital
  • SetArmy
  • SetScout
  • EndOfPhase
  • BuyScout
  • BuyArmy
  • MoveFigure
  • RevealTile
  • SetCity
User Interface
For the time being, it is so ugly that only a mother or father could love it. More details here.

Next steps
Implementation of game persistence. Because of Heroku limitations, I cannot use the disk file system as a means of storage. I'm planning to use Redis; there is a free quota for this service on Heroku. Redis will be used to store the games and also as a cache. This way, the server part will be completely stateless: every step will consist of restoring the game from Redis, executing a command, and storing the updated game back to Redis.
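A minimal sketch of that request cycle, assuming the Jedis client and JSON (de)serialization along the lines of the engine's io package (the names below are hypothetical):

  import redis.clients.jedis.Jedis

  object PersistenceSketch {
    // Placeholders for the engine's game type and command execution;
    // the real engine keeps the board as Scala objects serialized with Play JSON.
    case class GameBoard(json: String)
    def executeCommand(b: GameBoard, commandJson: String): GameBoard = b // stub

    // One stateless request: restore the game by its token, apply the command,
    // and store the updated game back to Redis.
    def handleCommand(redis: Jedis, token: String, commandJson: String): Unit = {
      val board   = GameBoard(redis.get(token))
      val updated = executeCommand(board, commandJson)
      redis.set(token, updated.json)
    }
  }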

Monday, July 31, 2017

Hosts, simple bash tool to run commands on several hosts

BigInsights (IBM Hadoop) requires a number of prerequisites to keep the cluster consistent. For instance, the /etc/hosts file should be the same on all hosts. There are plenty of tools available, but in the end I decided to create a small tool of my own.
The tool is available here (branch hosts).
It consists of several simple bash procedures for copying files to and executing a single command on all hosts in the cluster, for instance installing a required package on all hosts using the yum command.
Basically, two main tasks are implemented:

  • Share file across hosts (for instance /etc/hosts)
  • Run a single command on all hosts (for instance yum install package)
These two simple tools fulfill almost all the tasks necessary to prepare and run a multi-host installation of BigInsights and IBM Streams.
More details and description here.
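The tool itself is plain bash; just to illustrate the shape of the two tasks, an equivalent sketch in Scala using scala.sys.process could look like this (the host list, passwordless ssh/scp and the package name are assumptions):

  import scala.sys.process._

  object HostsSketch {
    // Hypothetical host list; the real tool reads the hosts from a configuration file
    val hosts = Seq("node1", "node2", "node3")

    // Run a single command on every host in the cluster (assumes passwordless ssh)
    def runOnAllHosts(command: String): Unit =
      hosts.foreach(host => Seq("ssh", host, command).!)

    // Share a file across all hosts, e.g. /etc/hosts
    def shareFile(path: String): Unit =
      hosts.foreach(host => Seq("scp", path, s"$host:$path").!)

    def main(args: Array[String]): Unit =
      runOnAllHosts("yum install -y some-package") // hypothetical package name
  }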

Saturday, July 22, 2017

Dockerize IBM Streams

It is very convenient to run IBM Streams in a Docker container to avoid the huge VM overhead. There is already one project available, but my plan is not so ambitious.
The solution is described here; the Dockerfile is also available there. It is not full automation, rather several pieces of advice on how to set up a Docker container with an IBM Streams domain and instance running inside: just a lightweight virtual machine, easy to set up and shut down.
But it comes with one serious limitation: only a single-host standalone installation is possible. A multi-host installation requires resolving the IP-DNS mapping, and I have failed to overcome this problem so far.
Still, a standalone installation is enough for development, testing and evaluation. I will keep working on supporting multi-host as well.

Saturday, June 17, 2017

Dockerize DB2

Sometimes it is necessary to set up and remove a DB2 instance quickly. So far, I have been using a KVM virtual machine with DB2 preinstalled. It works nicely, but a virtual machine, even KVM, comes with a huge and heavy overhead.
Another solution is to use Docker. Docker can be used as a lightweight virtual machine, with a much smaller footprint than a full-fledged one.
Here I describe the steps to run DB2 in a Docker container. In this example, the free DB2 Express-C edition is used, but the pattern can be extended to any other DB2 edition.
After completing these simple steps, I have a low-profile DB2 instance ready to start and stop at any time.

Several tasks are still pending:
  • DB2 installation is performed manually. It is possible to automate this process through the Dockerfile, although the procedure differs depending on the DB2 edition.
  • The DB2 instance has to be started manually every time the container is restarted. I'm looking for a way to run it automatically.

Tuesday, May 30, 2017

BigInsights, docker

Problem
I've spent some time trying to dockerize BigInsights, IBM Open Platform. After resolving some issues, I was able to perform the installation. Everything ran smoothly except the Spark installation: although it was reported as successful, the Spark History Server did not start.

 File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 424, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 265, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 243, in _assert_valid
    raise Fail(format("Source {source} doesn't exist"))
resource_management.core.exceptions.Fail: Source /usr/iop/current/spark-historyserver/lib/spark-assembly.jar doesn't exist
It turned out that spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm did not unpack all the files it contains. Some directories, /usr/iop/4.2.0.0/spark/lib and /usr/iop/4.2.0.0/spark/sbin, were skipped. What is more interesting, when installing the package with the rpm command directly, rpm -i spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm, all the content of the rpm was extracted correctly, while with the yum command, yum install spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm, some directories were excluded without any error being signaled. I spent a sleepless night trying to get a clue.
Solution
I found the explanation here. There is a mistake in the spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm package: some files in the rpm are marked as 'documentation'. This was revealed by running the rpm --dump command.
rpm -qp --dump spark-core_4_2_0_0-1.6.1_IBM-000000.el7.noarch.rpm
/usr/iop/4.2.0.0/spark/sbin/start-shuffle-service.sh 1279 1466126392 dfe89bfa493c263e4daa8217a9f22db12d6e9a9e1b161c5733acddc5d6b6498c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-slave.sh 3151 1466126392 623bc623a3c92394cd4b44699ea3ab78b049149f10ee4b6f41d30ab2859f8395 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-slaves.sh 2061 1466126391 24f329f4cd7c48b8cbd52e87b33e1e17228b5ff97f1bcb5b403e1b538b17e32a 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/start-thriftserver.sh 1824 1466126392 fcef75ab00ef295ade0c926f584902291b3c06131dcb88786a5899e48de12bae 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-all.sh 1478 1466126392 efb2dc4fafed8d94d652c8cfd81f6ba59de6e9c6ae04da2e234e291f867f1d41 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-history-server.sh 1056 1466126393 8f74163405d9832f7f930ed00582dd89f3e6ffc1c6f3750e3a4a1639c63593ae 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-master.sh 1220 1466126391 ba5058a39699ae4d478dc1821fc999f032754b476193896991100761cd847710 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-mesos-dispatcher.sh 1112 1466126393 b30ce7366e5945f6c02494ce402bcebe5573c423d5eed646b0efc37a2dbc4a8c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-mesos-shuffle-service.sh 1084 1466126393 6da69a8927513ed32fdb2d8088e3971596201595a84c9617aa1bdeefd0ef8de7 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-shuffle-service.sh 1067 1466126391 817ef1a4679c22a9bc3f182ee3e0282001ab23c1c533c12db3d0597abad81d58 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-slave.sh 1557 1466126392 cd0e35cd11b3452e902e117226e1ee851fc2cb7e2fcce8549c1c4f4ef591173e 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-slaves.sh 1298 1466126392 a3366c8ab6b142eb7caf46129db2e73e610a3689e3c3005023755212eb5c008c 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/sbin/stop-thriftserver.sh 1066 1466126391 53b9e9a886c03701d7b1973d2c4448c484de2b5860959f7824e83c4c2a48170b 0100755 root root 0 1 0 X
/usr/iop/4.2.0.0/spark/work 19 1466127922 0000000000000000000000000000000000000000000000000000000000000000 0120777 root root 0 1 0 /var/run/spark/work
/var/lib/spark 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/log/spark 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/run/spark 17 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X
/var/run/spark/work 6 1466127905 0000000000000000000000000000000000000000000000000000000000000000 040755 spark spark 0 0 0 X

The signature root root 0 1 0 (note the 1) marks the file as 'documentation'. To shrink the space consumed by packages, the Docker 'centos' image sets the 'tsflags=nodocs' option in the /etc/yum.conf configuration file, so yum skips files flagged as documentation.
So the temporary workaround is to comment out this option. To avoid pulling in unnecessary documentation for everything else, one can install Spark separately and keep this patch in force only during the installation of this component.

Wednesday, May 10, 2017

Sqoop, Hive, load data incrementally

Introduction
Hive is a popular SQL-like engine over HDFS data, and Sqoop is a tool to transfer data from external RDBMS tables into HDFS. Sqoop simply runs a SELECT query against the RDBMS table, and the result is stored in HDFS or directly as a Hive table. After the first load, the effective way to keep the tables synchronized is to update the Hive table incrementally, in order to avoid moving all the data again and again. Theoretically, the task is simple: assuming that the external table has a primary key and the source data are never updated or deleted, take the greatest key already inserted into the Hive table and transfer only the rows whose primary keys are greater than this threshold.
There is also an additional requirement. A very effective data format for Hive tables is Parquet, but Sqoop can only create Hive tables in text format. There is an --as-parquetfile Sqoop parameter, but I failed to make it work for Hive tables.
Solution
The solution is uploaded here.
I decided to implement a two-hop solution: first load the delta rows into a staging table in text format using Sqoop, and afterward insert the rows into the target Parquet Hive table. The whole workflow can be described as follows:
  • Check whether the target Hive table already exists. If yes, calculate the maximum value of the primary key.
  • Extract from the external RDBMS table all rows with a primary key greater than this maximum, or the whole table if the Hive table does not exist yet. Store the data in the staging table.
  • If the target Hive table does not exist, create it in Parquet format by executing the Hive command "CREATE .. TABLE AS SELECT * FROM stage.table".
  • If the target Hive table already exists, simply add the new rows with the command: INSERT INTO TABLE .. SELECT * FROM stage.table
The solution is implemented as an Oozie workflow. It can be launched as a single Oozie task or as an Oozie coordinator task. Sample shell scripts for both are available here. The common.properties file is used as a template for the job.properties and coordinator.properties files. A sketch of the decision logic is shown below.
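The real implementation is the Oozie workflow above, but the decision logic can be illustrated in a few lines of Scala over Hive JDBC (the connection details, table names and the Parquet CTAS form are assumptions, not the workflow's actual code):

  import java.sql.DriverManager

  object IncrementalLoadSketch {
    def main(args: Array[String]): Unit = {
      // Hypothetical connection details; the real workflow runs these statements as Oozie actions
      Class.forName("org.apache.hive.jdbc.HiveDriver")
      val conn = DriverManager.getConnection("jdbc:hive2://hiveserver:10000/default", "hive", "")
      val stmt = conn.createStatement()

      val target = "target_table" // Parquet target table (hypothetical name)
      val stage  = "stage.table"  // staging table loaded by Sqoop in text format

      // Does the target Hive table already exist?
      val exists = stmt.executeQuery(s"SHOW TABLES LIKE '$target'").next()

      if (!exists)
        // First run: create the Parquet table from the staging data
        stmt.execute(s"CREATE TABLE $target STORED AS PARQUET AS SELECT * FROM $stage")
      else
        // Subsequent runs: append only the delta rows placed in the staging table
        stmt.execute(s"INSERT INTO TABLE $target SELECT * FROM $stage")

      conn.close()
    }
  }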