Spring Integration in Hybris Cluster
This article is brought to you by Sergei Aksenenko, Lead Software Engineer at EPAM.
- Message ordering. For example, Node 1 takes the first message and Node 2 takes the second; if Node 1 is busy, its message is queued and the second message ends up being processed before the first. If the order of messages is important, this can cause issues.
- Race conditions and concurrent use of resources. Conflicts with models and the DB can happen even without an ordering issue: two nodes receive two messages that create the same entity and then clash when trying to save the models.
- Shared data. Working with read-only file system channels requires a shared metadata store implementation.
- Active – Passive (failing over if the master node fails)
- Active – Active (concurrent consumers; messages from dependent channels within one poll are still received by a single node)
- Use the hybris task engine instead of pollers for inbound channels,
- Create a custom metadata store based on hybris models,
- Make the system resilient to any failures during message processing.
Spring Integration within Hybris Cluster
Let’s take a closer look at how the Spring Integration FTP inbound channel is implemented. A self-descriptive XML configuration is used to initialize it:

<int-ftp:inbound-channel-adapter id="ftpInbound"
        channel="ftpChannel"
        session-factory="ftpSessionFactory"
        auto-create-local-directory="true"
        delete-remote-files="true"
        filename-pattern="*.txt"
        remote-directory="some/remote/path"
        remote-file-separator="/"
        preserve-timestamp="true"
        local-filename-generator-expression="#this.toUpperCase() + '.a'"
        local-filter="myFilter"
        temporary-file-suffix=".writing"
        local-directory=".">
    <int:poller fixed-rate="1000"/>
</int-ftp:inbound-channel-adapter>

Under the hood, this adapter definition is equivalent to wiring a file synchronizer and a synchronizing message source explicitly:
<bean id="ftpInboundFileSynchronizer" class="org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizer">
    <constructor-arg ref="ftpSessionFactory"/>
    <property name="remoteDirectory" value="some/remote/path"/>
    <property name="remoteFileSeparator" value="/"/>
    <property name="filter" ref="myRemoteFilter"/>
    <property name="preserveTimestamp" value="true"/>
    <property name="temporaryFileSuffix" value=".writing"/>
</bean>
<bean id="ftpSynchronizingMessageSource" class="org.springframework.integration.ftp.inbound.FtpInboundFileSynchronizingMessageSource">
    <constructor-arg ref="ftpInboundFileSynchronizer"/>
    <property name="autoCreateLocalDirectory" value="true"/>
    <property name="localFilter" ref="myFilter"/>
    <property name="localDirectory" value="."/>
</bean>
On each poll, the adapter drains the message source into the channel:

Message<File> message;
while ((message = messageSource.receive()) != null) {
    messageChannel.send(message);
}
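One of the goals listed above is making the system resilient to failures during message processing. A minimal sketch of that idea, assuming `Supplier`/`Consumer` as stand-ins for the `MessageSource.receive()` and `MessageChannel.send()` calls; the `onError` callback and all names here are illustrative, not from the article:

```java
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch: drain a message source into a channel, but survive failures on
// individual messages so one bad message does not abort the whole poll.
// Supplier/Consumer stand in for MessageSource.receive() and
// MessageChannel.send(); onError is a hypothetical hook where a real
// implementation could log the failure or reset the file's metadata
// entry so it is retried on the next poll.
public final class ResilientDrain {

    private ResilientDrain() {
    }

    public static <T> int drain(Supplier<T> receive, Consumer<T> send, Consumer<T> onError) {
        int sent = 0;
        T message;
        while ((message = receive.get()) != null) {
            try {
                send.accept(message);
                sent++;
            } catch (RuntimeException e) {
                onError.accept(message); // keep going instead of failing the poll
            }
        }
        return sent;
    }
}
```

The key design point is that the loop itself owns the error boundary, so a poisoned message is handed off rather than propagated.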
Metadata Store
Another problem arises when you work with read-only remote sources, such as an FTP server where you need to monitor new files but have no ability to move them to an archive folder or delete them. To track the processed files, Spring Integration uses metadata (essentially key-value pairs: file name and timestamp) and a MetadataStore. Out of the box, Spring Integration provides the following implementations: properties-file based, Redis, MongoDB, Zookeeper and Gemfire. Spring Integration 5.0 will add a JDBC metadata store, which may play better with hybris, but it will likely take some time before SAP upgrades the product to the new Spring version. With the current version, we can make a custom implementation based on the hybris persistence layer. For that we need two things:

- define a new type in items.xml:
<itemtype code="HybrisMetadata" generate="true" autocreate="true">
    <deployment table="hybris_metadata" typecode="14444"/>
    <attributes>
        <attribute qualifier="key" type="java.lang.String">
            <persistence type="property"/>
            <description>Metadata key</description>
            <modifiers optional="false" unique="true" initial="true"/>
        </attribute>
        <attribute qualifier="value" type="java.lang.String">
            <persistence type="property"/>
            <description>Metadata value</description>
            <modifiers optional="false" initial="true"/>
        </attribute>
    </attributes>
    <indexes>
        <index name="Metadata_Key" unique="true">
            <key attribute="key"/>
        </index>
    </indexes>
</itemtype>

- create an implementation of the ConcurrentMetadataStore interface based on the model service and a DAO for working with hybris models. There is nothing special about it: you just implement several methods: put, putIfAbsent, replace, get and remove.
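The second step can be sketched as follows. The class below mirrors the five methods of Spring Integration's ConcurrentMetadataStore contract; a ConcurrentHashMap stands in for the hybris persistence calls, and the comments mark where a real implementation would go through modelService and a flexible-search DAO against the HybrisMetadata type defined above (those collaborators are assumed, not shown in the article):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hedged sketch of a hybris-backed metadata store. The method set mirrors
// Spring Integration's ConcurrentMetadataStore; the ConcurrentHashMap is a
// stand-in so the sketch is self-contained, and each comment notes what the
// real hybris persistence call would look like (modelService and the DAO
// are assumptions, not code from the article).
public class HybrisMetadataStore {

    private final Map<String, String> store = new ConcurrentHashMap<>();

    public void put(String key, String value) {
        // real impl: find or create a HybrisMetadata model for this key,
        // set its value and call modelService.save(model)
        store.put(key, value);
    }

    public String get(String key) {
        // real impl: look the model up via the DAO, return its value or null
        return store.get(key);
    }

    public String remove(String key) {
        // real impl: modelService.remove(model) on the found model,
        // returning the old value (or null if nothing was stored)
        return store.remove(key);
    }

    public String putIfAbsent(String key, String value) {
        // must be atomic across nodes: the unique index on "key" in items.xml
        // makes a concurrent insert on another node fail at the DB level,
        // which is what enforces the if-absent semantics
        return store.putIfAbsent(key, value);
    }

    public boolean replace(String key, String oldValue, String newValue) {
        // compare-and-set: with models this needs a guarded update (or
        // optimistic locking) so two nodes cannot both win the replace
        return store.replace(key, oldValue, newValue);
    }
}
```

The cluster-safety of putIfAbsent and replace is exactly what the unique index on the key attribute buys you: the database, not the JVM, arbitrates which node wins.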