Implementing Product Whitelisting/Blacklisting in SAP Commerce Cloud for Large Product and Customer Bases


Introduction

Document-level access control ensures that search results contain only the products a logged-in customer is authorized to see. This is a common requirement for B2B solutions with large and sophisticated product and customer models. Many manufacturers and suppliers want to provide exclusive or restricted access to products for particular partners. Such an approach reduces the number of incorrect or incomplete orders and makes navigation easier. In this article, we discuss in detail how to support product whitelisting/blacklisting per customer in SAP Commerce Cloud for large product sets and a large customer base. We also present our solution and possible alternatives. Our tests showed that the solution is capable of processing millions of documents, tens of thousands of customers, and millions of access rules specifying which products are blacklisted/whitelisted for which customers. This article is a collaborative effort of EPAM solution architects.

Problem definition

SAP Commerce has limited support for fine-grained product access control on large amounts of data. It is represented by Product Visibility, configured at the category level, which works only with the persistence layer and not through search and Apache SOLR. Another approach is item-level access control. Both work fine when the amounts of data involved are relatively small. When it comes to millions of products and customers, the out-of-the-box solutions won’t fit. There were many facets and specific details in the original task we needed to take into account for solutioning. In this article, we discuss only one particular problem in isolation.
  • The Access Control Lists (ACLs), the rules specifying which products are whitelisted/blacklisted for which customers, are provided by an external system via data integration. The integration details are out of scope for this article.
  • The key challenge is how to implement product search. For other components, the solution is trivial.
  • There are 1,000,000 products (P) in the product catalog.
  • There are 1,000 product groups (PG).
  • There are 30,000 customers (C).
  • There are 5,000 customer groups (CG).
  • There are 2,000,000 rules (C<->P, CG<->P, C<->PG, CG<->PG).
The goal is to work out an effective way to store and handle the visibility rules so that a full reload completes as quickly as possible.

Solution

As you know, SAP Commerce Cloud is tightly integrated with Apache SOLR. This search engine is used not only for full-text search but also for populating product listing pages. There is no easy way to implement the required functionality by re-configuring SAP Commerce or Apache Solr. Additionally, because of the cloud nature of the new SAP Commerce and the limitations that come with it, adding new third-party software capable of supporting document-level access is not a solution either.
Apache SOLR, like many other search engines, uses an inverted index, a central component of almost all search engines and a key concept in Information Retrieval. Both full-text search and facets are built on top of the inverted index, and the limitations of search engines originate from the limitations of the inverted index.
The simplest and most straightforward approach is listing the relevant customers or customer groups in a designated product attribute and using it for facet filtering by putting a customer ID or customer group into a hidden facet. At the indexing phase, these IDs are treated as terms by SOLR. However, it was obvious to us that such a straightforward approach wouldn’t work with millions of products and tens of thousands of customers and customer groups.
In this document, we use the abbreviation ACL (Access Control List) to represent a list of customers and customer groups that can access a product or product group. Products have an ACL associated with them. The list is unordered. There are separate lists for allow and disallow rule groups. There are four topics we needed to study:
  1. ACL format
  • ACL items, their order, and format.
  2. Where to store the ACLs
  • Should we store the ACL field along with other product information?
  3. How to store the ACLs
  • What changes should we make to the SOLR configuration?
  • How should the field type be configured in the SOLR schema.xml for performance and scalability?
  4. What changes should we make in SAP Commerce? How scalable is the solution after making these changes?

ACL format

An ACL specifies allowed and disallowed customers as well as allowed and disallowed customer groups. Each customer or group is represented by a unique ID, up to eight characters in length. The order of the items doesn’t matter. There are two types of ACL:
  • whitelist
  • blacklist
So the simplest list can look like “C12,CG23,C45” (comma-separated) or “C12 CG23 C45” (whitespace-separated). In our tests, we used whitespace-separated strings.

How to store ACL

We experimented with two methods of storing ACLs for products:
  • in Apache Solr
  • in Redis
Both are elaborated below.

Apache Solr: StrField type vs TextField type

To store ACLs in Solr, we need to find the field type best suited to storing a simple list of IDs. Apache SOLR provides two classes for text fields out of the box: solr.StrField and solr.TextField. The major difference between them is that solr.StrField cannot have any tokenization, analysis, or filters applied, and will only give results for exact matches. The SAP Commerce schema.xml defines two field types with those classes:
<fieldType name="string" class="solr.StrField" docValues="true" sortMissingLast="true"/>

…

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">

<analyzer type="index">

<tokenizer class="solr.StandardTokenizerFactory" />

<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />

<filter class="solr.LowerCaseFilterFactory" />

<filter class="solr.RemoveDuplicatesTokenFilterFactory" />

</analyzer>

<analyzer type="query">

<tokenizer class="solr.StandardTokenizerFactory" />

<filter class="solr.SynonymGraphFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />

<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />

<filter class="solr.LowerCaseFilterFactory" />

<filter class="solr.RemoveDuplicatesTokenFilterFactory" />

</analyzer>

</fieldType>

…

<dynamicField name="*_string" type="string" indexed="true" stored="true" />

<dynamicField name="*_string_mv" type="string" indexed="true" stored="true" multiValued="true" />

…

<dynamicField name="*_text" type="text" indexed="true" stored="true" />

<dynamicField name="*_text_mv" type="text" indexed="true" stored="true" multiValued="true" />

<dynamicField name="*_text_en" type="text_en" indexed="true" stored="true" />

<dynamicField name="*_text_en_mv" type="text_en" indexed="true" stored="true" multiValued="true" />

…
It is important to draw attention to the attribute docValues="true" in the string type definition. According to the Apache SOLR documentation, docValues tells the SOLR server to use a column-oriented approach, with a document-to-value mapping built at index time, instead of the standard row-oriented one. In other words, the values of docValues fields are densely packed into columns instead of sparsely stored the way stored fields are. This feature was added in Lucene 4.0 to improve performance for faceting, sorting, and highlighting. The faceting engine, for example, needs to look up each term that appears in each document of the result set and pull the document IDs in order to build the list of facets. Of course, docValues consumes significantly more memory than a regular inverted-index type. Before explaining how we used docValues in our final solution, let’s have a look at the tests and experiments we conducted to get more inputs and insights.
Load Tests
The purpose of this test was to get a ballpark estimate of SOLR indexing performance for a large data set with different text field types. We generated a test set with random ACL field values and indexed it using curl (see Uploading data with index handlers) on a regular MacBook. We needed rough estimates and relative numbers. We used the standard SAP Commerce schema and server configuration (including JVM settings). The setup included:
  • 2.2 GHz Intel Core i7, 16Gb, MacOS X
  • 1,000,000 products
  • SOLR attributes containing whitespace-separated lists of customers/customer groups:
    • Allowed to access the item
    • Not allowed to access the item
Technically, for the simplified task, we need only one list, either allowed or not allowed. We used both in our tests because the actual business need of the client is more complex than the task explained above. The challenge originates from the fact that the same customer, customer group, or their subgroups can appear in both lists, which creates a new layer of complexity: one rule has to be prioritized against another, and constraints against both lists have to be applied at the query phase using the full power of the SOLR filtering engine. We’ll come back to this point later in the article. Conceptually, we decided to generate and load the following structure:
Product ID | Customers and groups allowed | Customers and groups not allowed
Product1 | C3, C5, CG1, CG6 | C2, C4
Product2 | C1, CG3, C5 | C2, CG6
Product3 | CG1, C2 | C3
  • The number of items in each list is random, from 0 to 1000.
  • The customer and customer group IDs are random, from 0 to 10000.
  • Product names and codes are random and unique.
  • Product IDs are random and unique.
Each element of the generated dataset was a JSON document combining the product fields with both ACL lists. (The original listing was lost to a code-highlighter error in the page source.)
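A minimal sketch of what one element could look like, consistent with the setup above; the field names (code_string, name_text, allowedGroups_acl, disallowedGroups_acl) are illustrative assumptions, not the exact names from our generator:

{
  "id": "Product1",
  "code_string": "Product1",
  "name_text": "Random product name 1",
  "allowedGroups_acl": "C3 C5 CG1 CG6",
  "disallowedGroups_acl": "C2 C4"
}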
The tests showed the following results:
Field type | solr.TextField | solr.StrField
Loading the whole dataset | 1388 items/sec | Out-of-memory
Loading the dataset in 20000-item chunks into an empty index | 1333 items/sec | Initially 2000-2500 items/sec, then out-of-memory
As we expected, the initial processing may be slightly faster with docValues, but it consumes a significantly larger amount of memory and even ended up with an out-of-memory exception.

‘Multivalued’ vs space-delimited field

The next challenge is how to represent a list of user/user group IDs. As the Apache SOLR documentation says, there are two major options. With the multivalued option, the list is represented explicitly, as a list of string items; no tokenizers are needed, because each item contains an ID and nothing more. With the text option, during indexing Solr splits the list-as-a-string into a list of terms using a tokenizer, and the terms then go through the configured filters (stemmers, duplicate removal, etc.). After that, the list of terms is turned into inverted indexes (or column-oriented indexes, depending on the field type configuration). In fact, with the second option SOLR naturally transforms one string value into a list of string values on behalf of the application logic.
Based on the fact that SOLR uses Apache Lucene under the hood, and that Lucene doesn’t support multivalued fields (but allows multiple fields with the same name), we assumed that the multivalued approach would be slower. However, with the multivalued type you can add or remove a single user/user group ID in the ACL field without listing the others in the SOLR update request (see Atomic Updates). Of course, it doesn’t mean that SOLR re-indexes only the new values; SOLR needs to update its index for the whole field. We assumed that using Atomic Updates would simplify the update operation; it is a matter of convenience, not performance. It is worth pointing out that Atomic Updates can still be applied to the TextField type too: when the whole list (a field value) is provided, a single field can be re-indexed rather than the whole document (see Updating parts of documents).
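For illustration, here is a minimal SolrJ sketch of an atomic update against a multivalued ACL field. The core name acl_core and the field name allowedGroups_acl are assumptions for this example, not names from our project:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;
import java.util.Collections;

public class AtomicAclUpdate {
    public static void main(String[] args) throws Exception {
        try (SolrClient solr = new HttpSolrClient.Builder("http://localhost:8983/solr/acl_core").build()) {
            SolrInputDocument doc = new SolrInputDocument();
            doc.addField("id", "Product1");
            // Atomic update: "add" appends one value to the multivalued field
            // without resending the rest of the list; "remove" deletes one.
            doc.addField("allowedGroups_acl", Collections.singletonMap("add", "CG42"));
            solr.add(doc);
            solr.commit();
        }
    }
}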
Load and Update tests
This test was aimed at putting the arguments above to the test. Additionally, we added a new field type based on solr.StrField, with multivalued enabled and docValues disabled. The setup was the same as in the previous experiment.
Field type | Space-delimited solr.TextField | Multivalued solr.TextField | Multivalued solr.StrField (docValues=false)
Loading the whole dataset | 1388 items/sec | Out-of-memory | 980 items/sec
Loading the dataset in 20000-item chunks into an empty index | 1333 items/sec | 500 items/sec | 1111 items/sec
Atomic update: removing items from the list, only even groups removed (~50%) | N/A | 444 items/sec | 645-700 items/sec
Atomic update: adding one item to the list | N/A | 606 items/sec | 1333 items/sec
Atomic update: removing one item from the list | N/A | 476 items/sec | 1333 items/sec
Atomic update: replacing the list with a shorter version (50% shorter, all even items removed) | 1300 items/sec | N/A | 1333 items/sec
As the results show, replacing the whole ACL field takes roughly the same time as adding or removing items with Atomic Updates. The multivalued option is also two times slower for the initial data load. We saw no clear benefits of the multivalued type over the delimiter-separated text. The convenience factor seems to be the only reason to use multivalued fields, but performance was more important to us.

Optimizing SOLR and OOTB Search Server Configuration

In the previous test, we used the OOTB configuration, which is expectedly not optimized for such volumes and such a data model. From the whole list of properties on the Field type definitions and properties page, the following ones deserve attention:
  • Stored. “If true, the actual value of the field can be retrieved by queries.”
If this parameter is set to true, you will be able to retrieve the field data when you search. “Stored” field values are available for display or return with the Solr response, while “not stored” values exist only as terms. Changing to “not stored” decreases memory consumption and I/O operations both during indexing and at query time. However, troubleshooting and debugging become more challenging.
  • omitTermFreqAndPositions. “If true, omits term frequency, positions, and payloads from postings for this field. This can be a performance boost for fields that don’t require that information. It also reduces the storage space required for the index. Queries that rely on position that is issued on a field with this option will silently fail to find documents.”
  • omitNorms. “If true, omits the norms associated with this field (this disables length normalization for the field, and saves some memory). Defaults to true for all primitive (non-analyzed) field types, such as int, float, date, bool, and string. Only full-text fields or fields that need an index-time boost need norms.”
Since we only need to check a specific user or user group ID against the ACL lists, there is no need to store frequency and position data. This should improve memory consumption as well as indexing time. Additionally, the index pipeline should be simplified as much as possible, which can be done by specifying analyzers in the type definition. If users and user groups are specified as items of a whitespace-delimited string, you need only one tokenizer, WhitespaceTokenizer, to split the string into parts. Based on the ideas above, we came up with the following SOLR field type configuration:
<fieldType name="acl_text" class="solr.TextField" positionIncrementGap="0" omitNorms="true" omitTermFreqAndPositions="true">

<analyzer>

<tokenizer class="solr.WhitespaceTokenizerFactory"/>

<filter class="solr.RemoveDuplicatesTokenFilterFactory"/>

</analyzer>

</fieldType>

…

<dynamicField name="*_acl_nonstored" type="acl_text" indexed="true" stored="false"/>

<dynamicField name="*_acl" type="acl_text" indexed="true" stored="true"/>
Load tests
This time, the tests were run in AWS (r5.xlarge: 4 vCPU, 32Gb) using the SolrJ library. SolrJ supports multi-threaded data loading and uses the CPU more efficiently than the simple curl-based loader. We used a dockerized Apache SOLR 7.2.1 with Xmx set to 2Gb. The enhanced field type was added to SAP Commerce’s out-of-the-box Solr configuration. The size of the generated JSON file with 1M products and up to 3,000 ACL elements was about 8Gb.
Dataset – 1M products and… | _text_en, single thread, curl | _text_en, solrj, 4 threads | _acl, solrj, 4 threads
500 ACL elements | 42 min | 31 min | 5.3 min
1000 ACL elements | 86 min | 57 min | 6.5 min
2000 ACL elements | 163 min | 111 min | 9 min
3000 ACL elements | 252 min | 168 min | 12 min
The results showed that:
  • The enhancements help a lot: the load time is 20x faster with the non-stored _acl fields.
  • Multi-threaded data loading is 1.5x faster than single-threaded.
  • The load time increases with the number of terms (ACL elements) per document, but the increase is smallest for the optimized field type.
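For reference, here is a minimal sketch of a multi-threaded SolrJ loader similar to the ones used in these tests. The core name, field name, and document generator are illustrative assumptions; ConcurrentUpdateSolrClient buffers documents in a queue and sends them with a configurable number of background threads:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.ConcurrentUpdateSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class AclBulkLoader {
    public static void main(String[] args) throws Exception {
        // 4 background threads drain a queue of buffered update requests.
        try (SolrClient solr = new ConcurrentUpdateSolrClient.Builder("http://localhost:8983/solr/acl_core")
                .withQueueSize(10_000)
                .withThreadCount(4)
                .build()) {
            for (int i = 0; i < 1_000_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", "Product" + i);
                doc.addField("allowedGroups_acl", randomAclString()); // e.g. "C3 C5 CG1 CG6"
                solr.add(doc);
            }
            solr.commit(); // close() also blocks until the queue is fully flushed
        }
    }

    // Generates a random whitespace-separated ACL string, as in the test dataset.
    private static String randomAclString() {
        StringBuilder sb = new StringBuilder();
        int n = (int) (Math.random() * 1000);
        for (int i = 0; i < n; i++) {
            sb.append(Math.random() < 0.5 ? "C" : "CG").append((int) (Math.random() * 10000)).append(' ');
        }
        return sb.toString().trim();
    }
}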

Where to store ACL

There are two major options on where to store the ACLs:
  1. As a product attribute, in the main core/collection
  2. As an attribute of the separate SOLR core/collection
Each option has major pros and cons.
Within the product core/collection. The design is simpler: fewer moving parts and points of failure, one SOLR core/collection, less backup/restore work, less work to initialize/update the SAP Commerce website through hAC, etc. However, this option will potentially slow down indexing. The size of each document in Solr is roughly the sum of the sizes of all attribute values, ACL lists are supposed to be very big, and processing time is longer for large documents. Even if we use Atomic Updates to isolate operations on the ACL fields from operations on the product data, as the tests showed, such a design has a significant impact on performance. Additionally, the update processes for products and visibility rules are logically separate, which may leave the core/collection in an inconsistent state: some product updates can be applied earlier than the corresponding ACLs for the product.
Separate SOLR core/collection. This option is potentially faster. The tests show that updating a relatively small, designated ACL-only SOLR core is much faster than updating an all-in-one SOLR core. Core swapping can help keep the changes of ACL and product data in sync. However, if we go with SolrCloud, the ACL collection has to be replicated across all product collection shards, which may create issues with cross-shard joins (see Joining across collections). There is a workaround, Colocated Collections, but the size of the joined collection and the available network bandwidth may create additional challenges.

Load/Update Tests and Results

The experiment below was done on SOLR running in standalone mode in AWS with the same parameters as in the previous experiment. The numbers below are in minutes:
Dataset – 1M products and… | Single core: initial load *_acl (solrj, 4 threads) | Single core: update *_acl | Separate ACL core: initial load *_acl (solrj, 4 threads) | Separate ACL core: update *_acl
500 retailer groups | 5.3 min | 7.4 min | 3 min | 4 min
1000 retailer groups | 6.5 min | 9 min | 5 min | 6 min
2000 retailer groups | 9 min | 12 min | 8 min | 8.5 min
3000 retailer groups | 12 min | 15 min | 9.5 min | 13 min
Conclusions:
  • Moving ACL data to a separate core leads to better performance. The gain ranges from 25% to 76%, depending on the size of the ACL record.
  • Expectedly, the update operation takes ~30% more time than the initial load.

Testing all in one

The goal of the final test is to measure the maximum processing speed / minimum processing time for the combined approach:
  • A custom SOLR field type (not stored, with omitNorms and omitTermFreqAndPositions on)
  • A separate SOLR core for ACLs
  • Multi-threaded processing
The number of users and user groups was 30,000. The expected size of the JSON file for 30K groups is ~50Gb; the test was conducted using direct load through SolrJ.
Dataset | 4 vCPU, stored=”false”, solrj, 4 threads | 16 vCPU, stored=”false”, solrj, 16 threads
1M products with 30K retailer groups | 83 min | 17 min
Using this approach, we were able to achieve a throughput of up to ~50Mb/s (3.125 Mb/s per thread). In this solution, SOLR doesn’t store the ACL: it updates the inverted index only. The dictionary (all possible terms for the field) is limited to 30,000 unique terms, where each term is a customer or customer group ID. This also saves a lot of memory and CPU resources.

Results

Finally, to sum up our findings, we may conclude that
  • The default SAP Commerce/SOLR configuration is good for typical/standard tasks and shows good query performance.
  • Understanding SOLR’s out-of-the-box capabilities and tuning options is critical when it comes to huge volumes, indexing-intensive operations, and non-standard search logic. For such cases, SOLR tuning is a must.
  • In terms of indexing and updating, multivalued field types are generally slower than normal text fields equipped with a tokenizer.
  • Atomic Updates changing only one or a few values in a multivalued field (=array) take about the same time as Atomic Updates replacing the whole value of a normal text field.
  • A separate, dedicated SOLR core/collection for ACL storage can speed up data loading by up to 76%. However, it creates additional challenges (see the details in the appropriate section).
  • Using non-stored ACL fields boosts the data load significantly. However, it creates new challenges (see the details in the appropriate section).
  • Making the indexing process multi-threaded is an efficient way of performance tuning.
  • All the changes applied together give great results: if the data were represented as a JSON file, its size would be about 50Gb, and it would take about 17 minutes to load it into SOLR.

What and how to change SAP Commerce

Let’s have a look at the adjustments that need to be made in SAP Commerce to use the ACLs. Here is where we are:
  1. ACL is a single field of the “*_acl” type
  2. ACL is stored in a separate SOLR core/collection as a whitespace-separated list of the customers and customer groups to which a product is or is not accessible.
  3. The groups are organized into a hierarchy.
What changes do we need to make on the Commerce Cloud side? Hierarchical customer and product groups are supported natively via the out-of-the-box user group hierarchy and product categories. When you search or open a product category, SAP Commerce Cloud makes a request to Solr. The Solr query is constructed based on user input (the search query, facets, and sorting parameters) and the search configuration (grouping, boosting, etc.).
Solr provides fq (the Filter Query parameter), a feature that allows terms to be matched in a binary manner without affecting the document’s score. Moving constraints from the main query (q) to fq can significantly speed up complex queries when the constraints fit the fq model. The fq and q queries are cached independently: Solr executes each fq query against the entire index and intersects the results of the main query with the individual filters (which are cached). In future requests, if the main q query changes and may potentially match a different set of documents, the cached filter queries can still be re-used to limit the set of documents the main query has to check.
The filter query can be efficiently used for ACL filtering. As we concluded above, the ACL rules can be stored in a separate core/collection. In order to use them in the query along with the product-specific filters, we need to use two cores in the same query, which is achieved through the Join Query Parser. The SAP Commerce Search API documentation recommends using a FacetSearchListener to change the way the platform interacts with the Solr server; a listener implementing this flow is shown below:
public class DefaultAclGroupFacetSearchListener implements FacetSearchListener {

    private static final String FILTER_QUERY_FIELD = "{!join from=%s fromIndex=%s to=%s}%s";

    @Autowired
    private UserService userService;

    @Override
    public void beforeSearch(FacetSearchContext facetSearchContext) throws FacetSearchException {
        // Add the cross-core join filter query before the search is executed
        facetSearchContext.getSearchQuery().addFilterQuery(this.getFilterQuery());
    }

    @Override
    public void afterSearch(FacetSearchContext facetSearchContext) throws FacetSearchException {
        // Handling notifications is not expected
    }

    @Override
    public void afterSearchError(FacetSearchContext facetSearchContext) throws FacetSearchException {
        // Handling notifications is not expected
    }

    // Collect the UIDs of all groups the current user belongs to
    private String[] getGroupsUid() {
        return userService.getCurrentUser().getGroups().stream()
                .map(PrincipalGroupModel::getUid)
                .map(String::toLowerCase)
                .toArray(String[]::new);
    }

    private QueryField getFilterQuery() {
        String[] value = getGroupsUid();
        // Join from the ACL core to the product core by document id
        String field = String.format(FILTER_QUERY_FIELD, "id", "acl_core", "id", "allowed_groups_acl");
        return new QueryField(field, SearchQuery.Operator.OR, value);
    }
}
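For a customer in groups cg1 and cg2, the listener above results in a filter query along the lines of fq={!join from=id fromIndex=acl_core to=id}allowed_groups_acl:(cg1 OR cg2). The core name acl_core and the field names here mirror the placeholders in the snippet; the exact query string SAP Commerce builds from the QueryField may differ in details.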

Alternative solutions, Known Limitations and Challenges

ACL Data outside Solr

Our solution works well if there is direct access to Solr and we are able to create a new core and load ACL data into it in a multi-threaded manner. However, some restrictions can interfere with these plans: for example, if Solr is provided as search-as-a-service, most probably you won’t be able to create a separate core and use join requests. For such setups there is an alternative: Redis can be used as the ACL storage. This NoSQL database is known as an ultrafast data store; Redis shows its best results when manipulating simple data structures such as lists and sets. Benchmarks show that Redis is capable of serving more than 100k SET requests per second on an Intel(R) Xeon(R) CPU E5520 @ 2.27GHz. We ran load tests against our data (AWS, r5.xlarge for the Redis server):
Measure | Time (s)
Loading 1.5M product visibility rules | ~100
Checking visibility of random groups for 1000 random products | 0.1
Checking visibility of random groups for 10000 random products | 1
For the loading test, we used AWS with 2 client machines, 4 threads each (c5.large). In this approach, the application server makes requests to the Redis-based service to check whether a product is available for a customer. For long lists of products, this logic is integrated into a post-processing phase applied to the products returned by Solr. Its efficiency depends on what portion of the data shouldn’t be displayed for a particular customer and on how such products are distributed in the set returned by Solr. If the majority of products are available for a customer, making a request for each product in the returned set can be considered a viable option. With this solution, the search process consists of the following stages (a sketch of the post-filter follows the list):
  1. Executing the text search in SOLR. At this phase, the ACLs are not taken into account, and the returned list will contain items the customer can’t access; this is part of the design. These items are filtered out at the next stage.
  2. Filtering out the items the customer has no access to. For that, the system checks each item from the returned set using the Redis-based API until the customer-facing list is complete. Since the results are normally delivered paginated, the size of the customer-facing list is limited to the page size.
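A minimal sketch of such a Redis post-filter using the Jedis client. The key layout (a Redis set of allowed principals per product, keyed as acl:allow:<productId>) and all names are illustrative assumptions; the pipeline batches all membership checks into one round trip:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.Pipeline;
import redis.clients.jedis.Response;
import java.util.ArrayList;
import java.util.List;

public class RedisAclPostFilter {

    // Keeps only the products whose "allow" set contains the given principal id.
    // For brevity we check a single principal; real code would check the
    // customer id and all of the customer's group ids.
    public List<String> filterVisible(Jedis jedis, List<String> productIds, String principalId) {
        Pipeline pipeline = jedis.pipelined();
        List<Response<Boolean>> checks = new ArrayList<>(productIds.size());
        for (String productId : productIds) {
            checks.add(pipeline.sismember("acl:allow:" + productId, principalId));
        }
        pipeline.sync(); // one round trip for the whole page of products

        List<String> visible = new ArrayList<>();
        for (int i = 0; i < productIds.size(); i++) {
            if (checks.get(i).get()) {
                visible.add(productIds.get(i));
            }
        }
        return visible;
    }
}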
This approach has its pros and cons. Among the advantages, the following can be highlighted:
  • No need to customize Solr or add a new core. This point is important if the cloud environment has customization limitations, such as the ones the Azure-based SAP Commerce Cloud has.
  • Initial data load is fast.
However, this solution has a number of serious shortcomings:
  • Slower search. The total execution time is the sum of the Solr query time, the Redis query time, and the post-processing (intersection of the sets).
  • Significant overhead (memory and CPU) is possible, because post-processing is performed on the application node for each customer query or product listing request.
  • Performance depends on the distribution: post-processing takes more time and resources (CPU, memory) if the majority of products are not visible/accessible to the majority of users.
  • Facets won’t work properly, because the hidden products are involved in the facet calculation. If such products are removed from the result set, the facet counts won’t be valid.
These disadvantages are not too significant if the products available to a customer constitute the overwhelming majority of the products indexed in Solr. Even for a 50/50 case, the solution may not be optimal. The worst scenario is when the majority of products are not supposed to be accessible to a customer. It is worth mentioning that post-filtering can be implemented inside Solr as well. This method is described here: https://lucidworks.com/2012/02/22/custom-security-filtering-in-solr/.

Making it scalable

In our task definition, we had no more than 30k user groups participating in ACL fields. How do we build the system if the bar is ten times higher? What if the rules change frequently? If we go with the Solr-only solution, we end up with a huge Solr index. If we go with the Redis-only solution, the visibility calculation is slower per item and, consequently, much slower for batch processing.
The scalable solution involves both the Solr and Redis components. In order to make the ACLs shorter and speed up indexing, you can use hashes instead of groups and perform the calculation in two phases. Instead of storing the customer or customer group ID in the ACL field, you store a hashed/bucketed value in the index. The hash/grouping function should be designed to reduce the number of unique elements in Solr. At the query phase, the results are post-processed to filter out irrelevant items: for each item from the result set returned by Solr, the system asks Redis for the final decision. When a sufficient number of products has been collected, the post-processed result set is delivered to the customer.
For a logged-in user, the customer ID and group IDs are known. To create a search result list or a category product list, the system calculates the hashes of the customer ID and all customer group IDs and uses them for filtering in the Solr filter query. The total number of hashed IDs is smaller than the total number of customer and group IDs. A sketch of the two-phase idea follows.
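A minimal sketch assuming a simple modulo-based bucketing function; the bucket count, token format, and Redis key layout are illustrative assumptions, not our production design:

public class AclHashing {

    private static final int BUCKETS = 4096; // caps the Solr term dictionary at 4096 unique tokens

    // Phase 1 (indexing/query): map any principal id to a coarse bucket token stored in Solr.
    // Different principals may collide in one bucket, so a Solr hit is only a candidate.
    public static String bucketToken(String principalId) {
        return "B" + Math.floorMod(principalId.hashCode(), BUCKETS);
    }

    // Phase 2 (post-processing): the exact rule is kept in Redis and resolves collisions.
    public static String redisKey(String productId) {
        return "acl:allow:" + productId;
    }
}

At index time the ACL field holds bucket tokens (e.g. "B17 B1042") instead of raw IDs; at query time the customer’s bucket tokens go into the filter query, and each candidate product is then confirmed against Redis as shown in the previous section.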

Conclusion

In this article, we presented several solutions to support product whitelisting/blacklisting per customer in SAP Commerce Cloud for large product sets and a large customer base, namely:
  • The Solr-based solution, where the ACLs (lists of customers or customer groups) are stored in the Solr index.
  • The Redis-based solution, where the ACLs are stored in Redis, which is queried for each item returned by Solr.
  • The combined solution, where the Solr ACL items are hashed and Redis is used at the post-processing phase to filter out irrelevant items.
The tests showed that the solution is capable of processing millions of documents, tens of thousands of customers, and millions of access rules.

Authors

The design, prototypes, and implementation were led by a team of solution architects and developers from the EPAM USA and EPAM Russia offices. Special thanks to our colleagues from EPAM Saint Petersburg, Russia, who provided insight and expertise that greatly assisted the research.
