Showing posts from February, 2018.

Test your available CPU resources in karaf

Some time ago we started developing a fun tool to provoke extreme situations like out-of-memory or stack-overflow exceptions. Now I have added a new feature to stress the CPU. It also counts the time used on the CPU to see how much CPU time is 'available'. Using the tool is a good indicator of how much CPU time is actually available to the JVM. Especially in virtual environments this information is useful. But you need to set the parameters carefully: don't stress the CPU for too long at once, and use a sleep between the test cycles, so the hypervisor does not move other VM guests away from your system and the result stays valid. Naughty: stress the CPU for a longer time and the VM host is yours. Install mhus-osgi-commands and use: shityo stress

This example is from my laptop:

karaf@root()> shityo stress threads=8
Used cpu nanoseconds per second ...
1: [8] 995M 982M 992M 994M 993M 996M 964M 996M = 7.916G
2: [8] 994M 995M 992M 996M 990M 994M 995M 993M = 7.951G
3: [8] 999M 998M 999M 99
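The idea of counting the CPU nanoseconds a thread actually gets can be sketched with the JDK's ThreadMXBean. This is a minimal illustration of the measurement principle, not the mhus-osgi-commands implementation:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuStress {
    // Busy-loop for roughly wallMs milliseconds and return the CPU nanoseconds
    // this thread actually got; the gap to wall time hints at hypervisor stealing.
    public static long measureBusyLoopCpuNs(long wallMs) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        if (!mx.isCurrentThreadCpuTimeSupported()) return -1;
        long start = mx.getCurrentThreadCpuTime();
        long until = System.nanoTime() + wallMs * 1_000_000L;
        long x = 0;
        while (System.nanoTime() < until) x++; // stress the CPU
        return mx.getCurrentThreadCpuTime() - start;
    }

    public static void main(String[] args) throws InterruptedException {
        long used = measureBusyLoopCpuNs(200);
        System.out.println("Used CPU nanoseconds in 200ms wall time: " + used);
        Thread.sleep(100); // sleep between cycles to keep the hypervisor calm
    }
}
```

On an idle machine the returned value is close to the wall time; in a crowded virtual environment it will be noticeably lower.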

Karaf Maven Tools performance issue in Eclipse IDE

If you develop Karaf related bundles in the Eclipse IDE or another Java IDE, the common way is to create Maven driven projects and let Maven manage dependencies and the build. Since Karaf 4.0 you need a special Maven plugin that parses the classes to find and automatically register services. The plugin is called 'karaf-services-maven-plugin' and runs on every build. Eclipse uses the 'Maven Builder' to organise and build Java classes in the background, so you can see errors while you are working on the files and detect compile problems fast. Therefore the Maven Builder is invoked for every 'Automatic Project Build', e.g. if you save files, start Eclipse or start exports / Maven builds. I found that the performance of the automatic build slows down rapidly once I start using the Maven plugin. In fact every save needs 60 seconds. I have 48 Maven related projects in my workspace. Starting Eclipse keeps me from work for at least 30 minutes! Not very happy about th
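One common workaround in such cases (my suggestion, not from the post) is to tell Eclipse's m2e to skip the plugin during incremental builds via a lifecycle-mapping entry, so the plugin only runs in real command-line builds. The goal name below is an assumption and should be checked against the plugin's documentation:

```xml
<!-- pom.xml: ask m2e to ignore karaf-services-maven-plugin in Eclipse incremental builds -->
<plugin>
  <groupId>org.eclipse.m2e</groupId>
  <artifactId>lifecycle-mapping</artifactId>
  <version>1.0.0</version>
  <configuration>
    <lifecycleMappingMetadata>
      <pluginExecutions>
        <pluginExecution>
          <pluginExecutionFilter>
            <groupId>org.apache.karaf.tooling</groupId>
            <artifactId>karaf-services-maven-plugin</artifactId>
            <versionRange>[0,)</versionRange>
            <goals><goal>service-metadata-generate</goal></goals>
          </pluginExecutionFilter>
          <action><ignore/></action>
        </pluginExecution>
      </pluginExecutions>
    </lifecycleMappingMetadata>
  </configuration>
</plugin>
```

The trade-off: the generated service metadata is then only refreshed by a full `mvn install`, not on every file save.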

Java: Find Method Reference Names

Since Java 8 it's possible to use references to methods, as is already standard in most other languages. The notation is Class::methodName, e.g. MyClass::myMethod. But the new feature is not what it promises. A developer would expect to get a reference object, like Method, to work with the referenced method. But Java did not implement real references; it wraps the reference in a calling lambda expression, something like (o) -> o.myMethod(), and returns a reference to this lambda construct. What a stupid behaviour! In this way it's not possible to get any information about the referenced method: not the name, not the expected return type etc. Short: The solution is to analyse the byte code and grab the method name out of it. Like it's done here: Long:
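For serializable lambdas there is also a JDK-only way to grab the name: the compiler generates a writeReplace method that yields a SerializedLambda carrying the implementation method name. A minimal sketch of this trick (my own illustration, not the code linked above):

```java
import java.io.Serializable;
import java.lang.invoke.SerializedLambda;
import java.lang.reflect.Method;
import java.util.function.Function;

public class MethodNameDemo {
    // Marker interface so the method reference becomes a serializable lambda.
    interface NamedFunction<T, R> extends Function<T, R>, Serializable {}

    // Extract the referenced method's name from the synthetic writeReplace().
    static String methodName(Serializable lambda) throws Exception {
        Method m = lambda.getClass().getDeclaredMethod("writeReplace");
        m.setAccessible(true);
        SerializedLambda sl = (SerializedLambda) m.invoke(lambda);
        return sl.getImplMethodName();
    }

    public static void main(String[] args) throws Exception {
        NamedFunction<String, Integer> f = String::length;
        System.out.println(methodName(f)); // prints "length"
    }
}
```

This only works if the target interface extends Serializable; for plain java.util.function types, byte-code analysis remains the way to go.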

How to order node children

Currently I store the child order information in the child properties. This means each child holds a 'sort' property which defines how to sort this node into the list of children. This strategy shows a lot of problems. First of all, if I try to change the order I have to change all child nodes. This could end in an access-denied problem if I do not have access to one of the child nodes. Second, if I move a child node, a stale 'sort' parameter may disturb the order information at the new location. Therefore the best, and mostly unused, strategy is to store the order information at the parent node. If you are able to write the parent node, you are able to reorder its children, and you are not forced to change the children themselves. I should implement it this way in cherry web soon!
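The parent-side approach can be sketched like this (an illustration only; the class and property names are my assumption, not cherry web code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch: the parent owns the order, children carry no 'sort' property.
public class ParentNode {
    private final List<String> childOrder = new ArrayList<>(); // child ids in display order

    public void addChild(String childId) { childOrder.add(childId); }

    // Reordering touches only the parent, so only write access to the parent is needed.
    public void moveChild(String childId, int newIndex) {
        if (!childOrder.remove(childId))
            throw new IllegalArgumentException("unknown child: " + childId);
        childOrder.add(newIndex, childId);
    }

    public List<String> getChildOrder() { return List.copyOf(childOrder); }
}
```

Moving a node between parents then means removing its id from one parent's list and appending it to the other's; no stale order data travels with the child.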

Resolving a Renderer

Resolving a renderer for a resource is not as simple as it seems. The team from Apache Sling showed me that rendering is more complex and should be more than a simple content output. In a modern WCM a resource is an abstract thing containing more meta data than pure content. All the meta data together brings useful content to the user. And there are different ways to present it: HTML is the visible presentation of the data; JSON and XML are technical presentations needed to download data in the background. Sling shows that we can have different renderers for the same content, depending on the current use case. How to find the correct content renderer is an interesting question. Sling uses request parameters like the request method and parts of the requested path to find a resource. Parameters from the resource then link to the correct script rendering the content (see this picture).
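The core of the resolution idea, picking a renderer from the resource type plus the requested extension, can be sketched with a simple lookup. This is my own illustration of the principle, not Sling's actual algorithm:

```java
import java.util.HashMap;
import java.util.Map;

public class RendererResolver {
    // key: "<resourceType>.<extension>" -> renderer script name (names are hypothetical)
    private final Map<String, String> renderers = new HashMap<>();

    public void register(String resourceType, String extension, String script) {
        renderers.put(resourceType + "." + extension, script);
    }

    // Prefer a type-specific renderer; fall back to a default one for the extension.
    public String resolve(String resourceType, String extension) {
        return renderers.getOrDefault(resourceType + "." + extension,
                renderers.get("default." + extension));
    }
}
```

The same resource then renders as HTML for a browser request and as JSON for a background download, purely by varying the extension.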

Bonita Sub-Processes

Playing around with Bonita sub-processes gave me a couple of interesting discoveries... The first step was to create a sub-process by selecting tasks and using the context menu to create a new sub-process. A sub-process is a closed process connected to the main process by an interface. But first a list of findings about the 'create subprocess' function:

- The new process lacks a lane. It's easy to create one, and you should do it to define a default actor.
- The new process lacks a start and end point. It works without them, but for consistency and a defined flow you should create them.
- Every task is renamed to 'Copy of '. That's ugly.
- The new process is disconnected from the main process. This means different variables: you need to map the in and out variables to transfer data between the processes. This will not be done automatically after creating the sub-process, but it will be done at creation time. But the mapping is not correct at all.

BPM Error Handling Best Practice

Creating business processes using a BPM (in my case Bonitasoft BPM) we had the problem of handling failures in the right way. At first we tried to catch all errors and handled them with an End/Terminate entity. Looking backward, it was an odd way to process exception states. The focus should be on maintenance and, most of all, on the customers using the system. Customers don't want to reinitialize a process every time an error occurs; they want the maintainer to fix it and let the process flow. Maintainers don't want much work with running processes. Let me show two very common scenarios from real life. All processes are based on tasks using operations working over the network, maybe sending mails, using a database etc.

- A loss of network connection (maybe only a segment) will cause a lot of tasks to fail and trigger the error handling.
- A user creates a process instance inserting data that's simply wrong, but the data can only be validated later
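The principle of keeping a failed task alive for a maintainer instead of terminating the process can be sketched as a retry wrapper. This is my own illustration of the pattern, not a Bonita API:

```java
import java.util.concurrent.Callable;

public class RetryableTask {
    // Run the operation; on failure wait and retry instead of terminating the
    // process, so a maintainer can fix the cause (e.g. the network) in between.
    public static <T> T runWithRetry(Callable<T> op, int maxRetries, long waitMs)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;             // remember the failure, do not terminate
                Thread.sleep(waitMs); // give the infrastructure time to recover
            }
        }
        throw last; // still failing: escalate to a maintainer task, not End/Terminate
    }
}
```

A transient network outage then resolves itself after a few retries, while persistent failures surface as a single maintenance task and the process instance survives.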

Migrate from Karaf 3 to 4, Part 2

Migrating from Karaf 3 to 4, another funny thing happened: all my JDBC datasources, configured in the deploy folder, were gone. In the first moment I was very hysterical, because we want to migrate the productive environment in the next days. But in the next moment I recognized that we had done all the test cases without an impact. Playing around, and in the end a deeper look into the Karaf sources, showed me the solution. The new commands provided by Karaf use a more complex query and filter to find JDBC datasources. The new command jdbc:ds-list needs a property 'dataSourceName' to be defined on the service to show the datasource in the list. The datasource itself was present as before, just not shown. First I reimplemented the old command jdbc:datasources to show all the datasources present as they are, by implemented interface (mhus-osgi-tools). Then I changed all the blueprint XML files and appended the claimed property <entry key="dataSourceName" value="${name
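A complete blueprint fragment might look like the sketch below. Only the dataSourceName service property comes from the post; the bean class and values are illustrative assumptions:

```xml
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
  <!-- illustrative datasource bean; your class and settings will differ -->
  <bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource"/>
  <service ref="dataSource" interface="javax.sql.DataSource">
    <service-properties>
      <!-- the property jdbc:ds-list filters for since Karaf 4 -->
      <entry key="dataSourceName" value="mydatasource"/>
    </service-properties>
  </service>
</blueprint>
```

Without the service property the datasource is still registered and usable; it is merely invisible to the new listing command.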

Shell: Simple bundle watch list

Creating a watch list can be laborious (what a word :-/), so shell scripting can help a lot. The first sample shows how to grep a list of interesting bundles to watch. In my case it's all mhu-lib bundles (add '--color never' to avoid color escape sequences in the output):

karaf@root()> bundle:list|grep --color never mhu-lib
 89 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-annotations
 90 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-core
 91 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-jms
 92 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-karaf
 93 | Active |  80 | 3.3.0.SNAPSHOT     | mhu-lib-persistence
karaf@root()>

I only need the bundle names, so cut the last column out of the result:

karaf@root()> bundle:list|grep --color never mhu-lib|cut -d '\|' -f 4 -t
mhu-lib-annotations
mhu-lib-core
mhu-lib-jms
mhu-lib-karaf
mhu-lib-persistence
karaf@root()>

Now we need to parse it line by line. A loop would help. The results a

Karaf: Scheduling GoGo Commands Via Blueprint

A new feature with mhu-lib 3.3 is the Karaf scheduling service. The service is designed to be configured by blueprint and executes gogo-shell scripts. In this way you are able to automate every regular maintenance task. Use this sample blueprint to print a hello world every 2 minutes:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="">
  <bean id="cmd" class="" init-method="init" destroy-method="destroy">
    <property name="name" value="cmd_hello"/>
    <property name="interval" value="*/2 * * * *"/>
    <property name="command" value="echo 'hello world!'"/>
    <property name="timerFactory" ref="TimerFactoryRef" />
  </bean>
  <reference

Migrate shell commands from Karaf 3 to Karaf 4

Today the migration from Karaf 3 to version 4 brought some interesting new effects. One of them is fully yellow 'blinking' source code where shell commands are implemented. It looks like all the shell interfaces from version 3 are deprecated now. The reason is that the developers want to define commands without using blueprint definition files in the OSGI-INF folder any more. To establish the new way, a new interface was created and is in focus. To use the new interface you first have to change the Maven configuration of your project. Add the following parameters:

<felix.plugin.version>3.0.1</felix.plugin.version>
<maven.version>2.0.9</maven.version>

And the following parts inside your main pom.xml:

<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.felix</groupId>
      <artifactId>maven-bundle-plugin</artifactId>
      <version>${felix.plugin.version}</version>
    </dependency&g

POJO handling with mhu-lib

mhu-lib brings a full featured POJO handler. The framework is able to parse POJO objects and uses the found attributes to get and set values. There is also a toolset to transform JSON or XML structures directly into/from POJO objects. It's all located in the package 'de.mhus.lib.core.pojo'. The base class is the PojoParser. It does not do much work itself, but it brings everything together. The first important choice is how to parse. Parse strategies implemented by default look for attributes (AttributesStrategy) or functions (FunctionsStrategy). The default strategy (DefaultStrategy) combines both, but it's possible to change the strategy object for the parser. Strategies also look for the @Embedded annotation and parse deeper inside these attributes. Important: the attribute based strategy is also able to access 'private' declared values! There is no need to declare them all 'public'. The strategy creates a PojoModel which can be manipulated by filters. The default filter
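The core idea behind an attribute-based strategy, reading even private fields via reflection, can be sketched with plain JDK reflection. This is an illustration of the technique, not the mhu-lib API:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

public class AttributeSketch {
    // Read all declared fields of a POJO into a map, including private ones.
    public static Map<String, Object> toMap(Object pojo) throws IllegalAccessException {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Field f : pojo.getClass().getDeclaredFields()) {
            f.setAccessible(true); // private fields become readable
            out.put(f.getName(), f.get(pojo));
        }
        return out;
    }

    static class Person {           // example POJO with only private attributes
        private String name = "hugo";
        private int age = 42;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(toMap(new Person()));
    }
}
```

A real strategy would additionally recurse into fields marked @Embedded and hand the collected model to filters, as the post describes.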

Parameter Related Classes Tree

In mhu-lib there is general attention to properties or attribute related objects. The implementation follows the philosophy that most things are attribute related and should be handled the same way. Properties and attributes are handled the same, not because they are the same, but because they have the same behavior. First of all, the IProperties class (since mhu-lib 3.3 a real interface) defines the basic behavior to set and get different value types. All Java primitives are supported, plus the 'Object' type. The default implementation (e.g. MProperties) uses the getObject() variant and casts the object to the requested primitive using the 'MCast' utilities. This simple structure is a flat properties store. The 'ResourceNode' and 'WritableResourceNode' extend the structure to be a tree. With 'getNodes()', 'getNode(key)' or 'getParent()' it is possible to traverse through the tree structure. An interesting extension of '
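The getObject()-plus-cast pattern can be sketched like this (an illustrative stand-in, not the actual IProperties/MCast code):

```java
import java.util.HashMap;
import java.util.Map;

// Flat properties store: one getObject() plus casting helpers for primitives.
public class SimpleProperties {
    private final Map<String, Object> values = new HashMap<>();

    public void set(String key, Object value) { values.put(key, value); }

    public Object getObject(String key) { return values.get(key); }

    // Typed access is derived from getObject() by casting/converting.
    public int getInt(String key, int def) {
        Object v = getObject(key);
        if (v instanceof Number) return ((Number) v).intValue();
        if (v instanceof String) {
            try { return Integer.parseInt((String) v); }
            catch (NumberFormatException e) { return def; }
        }
        return def;
    }
}
```

All other primitive getters follow the same shape, which is why one Object-based store can serve every value type.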

What is mhus-lib for?

This article is about the need for mhus-lib and gives a short history to explain it. It also describes the different sections shortly. (First of all, it's a library for the Java VM.) The lib is called by my name because originally it was designed as a tool to solve problems I had in every project in the same way. The first version (at that time called mhu-lib) was a small set of static classes to convert data or to load simple information like the current host name. In the following years the library grew, also solving more complex problems like how to handle different configurations in a common way, handle POJOs and database access. It learned to serialize objects into an RDBMS and how to define forms in a general way. Also a common logging framework I really love was implemented. That was version 2. In version 3 all the things got a more common and integrated touch. Package declarations changed and the library is now OSGi usable. But why not use one of the common frameworks

Fresh command shell:cut

Just added the command 'shell:cut' to allow splitting of lines. The command is more flexible than the original shell command and allows splitting by regular expressions. This is the option list:

-r Replace regex
-e Regular expression
-d Separate parts by this regex
-t Trim every single part
-f Fields expression
-j Glue between fields
-p Positions

Field expressions are a comma separated list of fields or field ranges. The first field is zero '0'. Examples:

bundle:list|cut -d \\\| -t -j ' ' -f 1,3,2
Active 80 0.0.0
...
bundle:list|cut -d \\\| -t -j ' ' -f 1-,0
Active 80 1.4 Commons DBCP 71
...
bundle:list|cut -d \\\| -t -j ' ' -f 1,abc,2
Active abc 80
...

The option -p is followed by a list of ranges in the line. Examples:

bundle:list|cut -j '&' -p 10-20,1-10,abc
ve |  80 |&69 | Acti&abc
...

A positions definition can't be out of bounds; in the worst case it will be ignored. An empty line - without written fi

Scripting karaf gogo shell with mhu-osgi

Scripting the Karaf shell can be very constricted, so an extension can help to make more of the possibilities of the Karaf engine. The main reason to enable more scripting is the possibility to build easy maintenance solutions without starting a software project for every small task. (The same reason bash became popular.) The main extensions are 'shell:bash' and 'shell:run'.

shell:bash

This extension only works on unix systems. It clones the 'shell:exec' command and makes it easier to execute bash scripts and bring them together with the gogo shell. To use it you also need the commands 'shell:read' and 'shell:write' to work with files. These commands will be discussed later. The following example writes the output of bundle:list to a file and executes a bash script:

bundle:list|write test.txt
bash "cat test.txt|cut -d \\| -f 5"

shell:run

More helpful is the 'run' command. You can run scripts that can be extended w

Installing mhu tools for karaf

The best choice for developers to install the mhu tools is to use the current snapshot. With the snapshot most of the discussed topics are working. I will make a note of the used SNAPSHOT; you can use it or use the following release. If you use the release, skip the following description and start with the Karaf installation.

Build the current Snapshot

Get the sources:

git clone
git clone

Compile it into the local repository:

cd mhus-lib
mvn install
cd ..
cd mhus-osgi-tools
mvn install

Karaf installation

Setup Karaf

Download Karaf 3.0.5 from the website and unpack the zip bundle. Now change directory into the karaf directory 'apache-karaf-3.0.5', start karaf for the first time with ./bin/karaf and stop it with 'logout'. If you need help with proxies or other problems then ask duckduckgo.

Basic Setup

Route the loggin