2017-08-03

Hibernate Search + Infinispan + JGroups backend slave lock issue

I am new to Hibernate Search. We decided to use Hibernate Search in our application and chose JGroups as the backend. Here is my configuration file:

<?xml version="1.0" encoding="UTF-8"?> 
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="urn:infinispan:config:7.0 
http://www.infinispan.org/schemas/infinispan-config-7.0.xsd 
        urn:infinispan:config:store:jdbc:7.0 http://www.infinispan.org/schemas/infinispan-cachestore-jdbc-config-7.0.xsd" 
xmlns="urn:infinispan:config:7.0" 
xmlns:jdbc="urn:infinispan:config:store:jdbc:7.0"> 

<!-- *************************** --> 
<!-- System-wide global settings --> 
<!-- *************************** --> 
<jgroups> 
    <!-- Note that the JGroups transport uses sensible defaults if no configuration 
     property is defined. See the JGroupsTransport javadocs for more flags. 
     jgroups-udp.xml is the default stack bundled in the Infinispan core jar: integration 
     and tuning are tested by Infinispan. --> 
    <stack-file name="default-jgroups-tcp" path="proform-jgroups.xml" /> 
</jgroups> 

<cache-container name="HibernateSearch" default-cache="default" statistics="false" shutdown-hook="DONT_REGISTER"> 

    <transport stack="default-jgroups-tcp" cluster="venkatcluster"/> 

    <!-- Duplicate domains are allowed so that multiple deployments with default configuration 
     of Hibernate Search applications work - if possible it would be better to use JNDI to share 
     the CacheManager across applications --> 
    <jmx duplicate-domains="true" /> 

    <!-- *************************************** --> 
    <!-- Cache to store Lucene's file metadata --> 
    <!-- *************************************** --> 
    <replicated-cache name="LuceneIndexesMetadata" mode="SYNC" remote-timeout="25000"> 
     <transaction mode="NONE"/> 
     <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" /> 
     <indexing index="NONE" /> 
     <eviction max-entries="-1" strategy="NONE"/> 
     <expiration max-idle="-1"/> 
     <persistence passivation="false"> 
      <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false"> 
       <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property> 
       <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool> 
       <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE"> 
        <jdbc:id-column name="ID" type="VARCHAR(255)"/> 
        <jdbc:data-column name="DATA" type="BLOB"/> 
        <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/> 
       </jdbc:string-keyed-table> 
      </jdbc:string-keyed-jdbc-store> 
     </persistence> 
    </replicated-cache> 

    <!-- **************************** --> 
    <!-- Cache to store Lucene data --> 
    <!-- **************************** --> 
    <distributed-cache name="LuceneIndexesData" mode="SYNC" remote-timeout="25000"> 
     <transaction mode="NONE"/> 
     <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" /> 
     <indexing index="NONE" /> 
     <eviction max-entries="-1" strategy="NONE"/> 
     <expiration max-idle="-1"/> 
     <persistence passivation="false"> 
      <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false"> 
       <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property> 
       <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool> 
       <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE"> 
        <jdbc:id-column name="ID" type="VARCHAR(255)"/> 
        <jdbc:data-column name="DATA" type="BLOB"/> 
        <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/> 
       </jdbc:string-keyed-table> 
      </jdbc:string-keyed-jdbc-store> 
     </persistence> 
    </distributed-cache> 

    <!-- ***************************** --> 
    <!-- Cache to store Lucene locks --> 
    <!-- ***************************** --> 
    <replicated-cache name="LuceneIndexesLocking" mode="SYNC" remote-timeout="25000"> 
     <transaction mode="NONE"/> 
     <state-transfer enabled="true" timeout="480000" await-initial-transfer="true" /> 
     <indexing index="NONE" /> 
     <eviction max-entries="-1" strategy="NONE"/> 
     <expiration max-idle="-1"/> 
     <persistence passivation="false"> 
      <jdbc:string-keyed-jdbc-store preload="true" fetch-state="true" read-only="false" purge="false"> 
       <property name="key2StringMapper">org.infinispan.lucene.LuceneKey2StringMapper</property> 
       <jdbc:connection-pool connection-url="jdbc:mysql://localhost:3306/entityindex" driver="com.mysql.jdbc.Driver" password="pf_user1!" username="pf_user"></jdbc:connection-pool> 
       <jdbc:string-keyed-table drop-on-exit="false" create-on-start="true" prefix="ISPN_STRING_TABLE"> 
        <jdbc:id-column name="ID" type="VARCHAR(255)"/> 
        <jdbc:data-column name="DATA" type="BLOB"/> 
        <jdbc:timestamp-column name="TIMESTAMP" type="BIGINT"/> 
       </jdbc:string-keyed-table> 
      </jdbc:string-keyed-jdbc-store> 
     </persistence> 
    </replicated-cache> 

</cache-container> 

Here is my JGroups file:

<config xmlns="urn:org:jgroups" 
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xsi:schemaLocation="urn:org:jgroups 
    http://www.jgroups.org/schema/JGroups-3.6.xsd"> 
    <TCP bind_addr="${jgroups.tcp.address:127.0.0.1}" 
    bind_port="${jgroups.tcp.port:7801}" 
    enable_diagnostics="false" 
    thread_naming_pattern="pl" 
    send_buf_size="640k" 
    sock_conn_timeout="300" 

    thread_pool.min_threads="${jgroups.thread_pool.min_threads:2}" 
    thread_pool.max_threads="${jgroups.thread_pool.max_threads:30}" 
    thread_pool.keep_alive_time="60000" 
    thread_pool.queue_enabled="false" 
    internal_thread_pool.min_threads= 
    "${jgroups.internal_thread_pool.min_threads:5}" 


    internal_thread_pool.max_threads= 
    "${jgroups.internal_thread_pool.max_threads:20}" 
    internal_thread_pool.keep_alive_time="60000" 
    internal_thread_pool.queue_enabled="true" 
    internal_thread_pool.queue_max_size="500" 

    oob_thread_pool.min_threads="${jgroups.oob_thread_pool.min_threads:20}" 
    oob_thread_pool.max_threads="${jgroups.oob_thread_pool.max_threads:200}" 
    oob_thread_pool.keep_alive_time="60000" 
    oob_thread_pool.queue_enabled="false" 
    /> 
    <S3_PING access_key="" 
     secret_access_key="" 
     location="mybucket" 

/> 
    <MERGE3 min_interval="10000" 
     max_interval="30000" 
    /> 
<FD_SOCK /> 
<FD_ALL timeout="60000" 
     interval="15000" 
     timeout_check_interval="5000" 
/> 
    <VERIFY_SUSPECT timeout="5000" /> 
<pbcast.NAKACK2 use_mcast_xmit="false" 
       xmit_interval="1000" 
       xmit_table_num_rows="50" 
       xmit_table_msgs_per_row="1024" 
       xmit_table_max_compaction_time="30000" 
       max_msg_batch_size="100" 
       resend_last_seqno="true" 
/> 
<UNICAST3 xmit_interval="500" 
     xmit_table_num_rows="50" 
     xmit_table_msgs_per_row="1024" 
     xmit_table_max_compaction_time="30000" 
     max_msg_batch_size="100" 
     conn_expiry_timeout="0" 
/> 
<pbcast.STABLE stability_delay="500" 
       desired_avg_gossip="5000" 
       max_bytes="1M" 
/> 
<pbcast.GMS print_local_addr="false" 
      join_timeout="15000" 
/> 
<MFC max_credits="2m" 
    min_threshold="0.40" 
/> 
<FRAG2 /> 
</config> 

And here is my flush-tcp.xml file:

<config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
    xmlns="urn:org:jgroups" 
    xsi:schemaLocation="urn:org:jgroups 
http://www.jgroups.org/schema/jgroups.xsd"> 
<TCP bind_port="7801"/> 
<S3_PING access_key="" 
     secret_access_key="" 
     location="" 

/> 
<MERGE3/> 
<FD_SOCK/> 
<FD/> 
<VERIFY_SUSPECT/> 
<pbcast.NAKACK2 use_mcast_xmit="false"/> 
<UNICAST3/> 
<pbcast.STABLE/> 
<pbcast.GMS/> 
<MFC/> 
<FRAG2/> 
<pbcast.STATE_TRANSFER/> 
<pbcast.FLUSH timeout="0"/> 
</config> 

These are the Hibernate settings:

propertyMap.put("hibernate.search.default.directory_provider", "infinispan"); 
propertyMap.put("hibernate.search.lucene_version", KeywordUtil.LUCENE_4_10_4); 
propertyMap.put("hibernate.search.infinispan.configuration_resourcename", 
"hibernate-search-infinispan-config.xml"); 
propertyMap.put("hibernate.search.default.worker.execution", "sync"); 
propertyMap.put("hibernate.search.default.worker.backend", "jgroups"); 
propertyMap.put("hibernate.search.services.jgroups.configurationFile", 
"flush-tcp.xml"); 
propertyMap.put("hibernate.search.default.exclusive_index_use", "true"); 

Initially we started the cluster with a single node using the above configuration, and we add nodes to the cluster depending on load. This is our architecture. Say we started the cluster at 10:00 AM: the only node becomes the master, and everything works fine. At 10:10 we added one more node to the cluster, with one small change. Here is the change:

propertyMap.put("hibernate.search.default.exclusive_index_use","false"); 

Then I created an index through the second node, and a locking error occurred. Here is the error:

org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: 
[email protected] 

Question: in theory, the second node should become a slave and should never acquire the lock on the index. It should instruct the master node to create the index through the JGroups channel. But that didn't happen. Can one of you help me with this issue? We have a problem in our production system. Please help me.

Answer


Question: in theory, the second node should become a slave and it should never acquire the lock on the index. It should instruct the master node to create the index through the JGroups channel.

There may be two problems here.

1. Different values for exclusive_index_use

Maybe someone else can confirm, but unless your new node only works with a completely separate persistence unit dealing with entirely different indexes, I doubt it is a good idea to use different values for exclusive_index_use on different nodes.

exclusive_index_use is not about not acquiring locks, but about releasing them as soon as possible (after each change). If other nodes work in exclusive mode, they will never release the lock, and the new node will time out waiting for it.

Also note that disabling exclusive_index_use is a sure way to reduce write performance, since it requires constantly closing and reopening the index writer. Use it with caution.

Finally, as you pointed out, only one node (the JGroups master) should write to the index at any given time, so you should not need to disable exclusive_index_use in your case. There must be another problem...
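To guard against the first problem, a minimal sketch of a startup sanity check that all nodes were launched with the same exclusive_index_use value (the helper name is mine, and I am assuming Hibernate Search's documented default of "true" when the property is absent):

```java
import java.util.List;
import java.util.Map;

// Hypothetical sanity check: every node's property map should carry the
// same exclusive_index_use value, since mixing true/false across nodes
// is what makes the non-exclusive node time out waiting for the lock.
public class ConfigCheck {
    private static final String KEY = "hibernate.search.default.exclusive_index_use";

    public static boolean consistentExclusiveUse(List<Map<String, String>> nodeProps) {
        // Treat a missing property as "true", the library default.
        String first = nodeProps.get(0).getOrDefault(KEY, "true");
        for (Map<String, String> props : nodeProps) {
            if (!first.equals(props.getOrDefault(KEY, "true"))) {
                return false;
            }
        }
        return true;
    }
}
```

Run against the two property maps from the question (one with "true", one with "false"), this returns false, flagging exactly the mismatch described above.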

2. Master/slave election

If I remember correctly, the default master/slave election strategy may elect a new master when a new node is added. Also, we fixed several bugs related to dynamic master election in the latest Hibernate Search version (not released yet), so you may be affected by one of them.

You could try using the jgroupsMaster backend on the first node and jgroupsSlave on the second. There will no longer be automatic elections, so you will lose the ability to keep the service running when the master node fails, but from what I understand your main concern is scaling out, so it may give you a temporary solution.

On the master node:

propertyMap.put("hibernate.search.default.worker.backend", "jgroupsMaster"); 

On the slave nodes:

propertyMap.put("hibernate.search.default.worker.backend", "jgroupsSlave"); 

Warning: you will need a full restart! Keeping the current jgroups backend on your master while adding another node with the jgroupsSlave backend will cause trouble!

You may also need some configuration changes for your Infinispan directory, but I am not too familiar with this directory. You can check the documentation: https://docs.jboss.org/hibernate/stable/search/reference/en-US/html_single/#jgroups-backend
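Putting the static master/slave suggestion together with the settings from the question, here is a hedged sketch of building per-role properties; the helper name and role flag are hypothetical, and only the worker backend differs between roles:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical helper: one shared property set per node, with the
// worker backend chosen statically by role instead of by election.
public class NodeConfig {
    public static Map<String, String> forRole(boolean isMaster) {
        Map<String, String> p = new HashMap<>();
        p.put("hibernate.search.default.directory_provider", "infinispan");
        p.put("hibernate.search.infinispan.configuration_resourcename",
              "hibernate-search-infinispan-config.xml");
        p.put("hibernate.search.default.worker.execution", "sync");
        // Static roles mean no automatic election: if the master dies,
        // index writes stop until a new master node is brought up.
        p.put("hibernate.search.default.worker.backend",
              isMaster ? "jgroupsMaster" : "jgroupsSlave");
        p.put("hibernate.search.services.jgroups.configurationFile",
              "flush-tcp.xml");
        return p;
    }
}
```

Exactly one node in the cluster should be started with isMaster = true; every other node gets the jgroupsSlave backend and delegates index writes to the master over the JGroups channel.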


I tried your second suggestion. Even with that master/slave setup, I am still getting the lock error. I am not sure what causes the lock. Please let me know if you have any more ideas. – user1273969