IBM WebSphere Portal 8.5: User assistance for administrators

Manage Search

Use the Manage Search portlet to administer portal search.

To manage Portal Search, click the Administration menu icon in the toolbar. Then, click Portal User Interface > Manage Search from the portal menu. The portal displays the administration portlet Manage Search.

Note: This portlet help gives instructions for using the Manage Search portlet only. For more information about search services, collections, and scopes, planning considerations, and how to configure search in your portal, see the WebSphere Portal Information Center > Portal Search.

Search Services

Search Services allows you to view and manage the WebSphere Portal search services. A search service represents a separate instance of the search engine that is provided by WebSphere Portal and that users can search through the Search Center. When you create a search collection, you must select a search service. That search service is then used for the searches that users request on that collection. A single search service can serve multiple search collections. You can set parameters to configure a portal search service, which lets you set up separate instances of search services with different configurations. You can also set up multiple portal search services and distribute the search load over several nodes. The following search service is provided by WebSphere Portal by default:
Portal Search Service
Select the Portal Search Service to manage search collections that contain portal pages, content that is managed by Web Content Management, or indexed web pages. For a cluster portal environment, you need to set up a remote search service. For more information, refer to the Portal Search documentation in the WebSphere Portal Information Center.
Note: The HTTP crawler of the Portal Search Service does not support JavaScript. Text that is generated by JavaScript might not be available for search.

You can also create additional custom search services and add them to your portal.

Creating a new search service

To create a new search service, click the New Search Service button. Manage Search displays the New Search Service page. When you specify a Service name, make sure that the name is unique within the current portal or virtual portal.

Search Collections and content sources

Search Collections allows you to view and manage the search collections and their content sources in the portal. You can build and maintain search collections of web content, Web Content Management content, and portal content. Users can then search these collections by using the portal Search Center.

A search collection can have one or more content sources with content such as web pages, Web Content Management content, or portal pages and portlets.

The portal default search collection combines two content sources and their related crawlers:
  • The Portal Content Source contains the local portal site, where users can search for portal pages and portlets.
  • The Web Content Manager (WCM) Content Source contains web content that users can search.

During the search collection build process, a crawler (robot) retrieves content from the content sources for indexing. The search collection stores keywords and metadata and maps them to their original source. This allows fast processing of requests from the Search Center portlet.

Searchable resources can be stored on the local portal server or on remote content sources. The crawlers can process content if it is accessible through the HTTP protocol, for example, portal pages, Web Content Management content, and documents and content that are hosted by web servers. The documents can be of different types, for example, editable text files, office suite documents such as Microsoft Office or OpenOffice files, or PDF files.
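The build process described above can be sketched as a minimal inverted index: keywords are extracted from each fetched document and mapped back to the URL of the original source, which is what makes Search Center requests fast. This is an illustrative sketch only, not the actual Portal Search storage format; the document set and URLs are made up.

```python
# Minimal sketch of the crawl-and-index step: keywords are extracted
# from each fetched document and mapped back to the source URL.
# Illustrative only; not the actual Portal Search storage format.
import re

def build_index(documents):
    """documents: dict mapping source URL -> fetched text."""
    index = {}
    for url, text in documents.items():
        for word in set(re.findall(r"[a-z0-9]+", text.lower())):
            index.setdefault(word, set()).add(url)
    return index

def search(index, term):
    """Map a search term back to the URLs that contain it."""
    return sorted(index.get(term.lower(), set()))

docs = {
    "http://example.com/a": "Portal pages and portlets",
    "http://example.com/b": "Web content pages",
}
index = build_index(docs)
```

Because only keywords and source mappings are stored, a query is a dictionary lookup rather than a scan of the original content.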

Managing Search Collections

From the Search Collections panel, select the following options or icons and do the following tasks on search collections:
  • Refresh. Select this option to refresh the list of search collections. This action updates the information and the available option icons for the collections. Examples:
    • If a crawl is running or was completed, the number of documents is updated.
    • If a crawl was completed on a collection since the last refresh, option icons can appear, such as Search and Browse the Collection.
    • If another administrator also worked on search collections at the same time, the information is updated.
  • From the Search Collections page, you can import and export search collections. You can also view the status of the search collection and manage the content sources of a search collection by clicking the search collection name.
    Note: The icons for some tasks are only available if the current user can do the specific task on the search collection.

Creating a search collection

Some of the entry fields and options that are available when you create a search collection are as follows:

Note: The parameters that you select when you create the search collection cannot be changed later. Therefore, plan ahead and apply special care when you create a new search collection. If you want to change parameters for a search collection, you must create a new search collection and select the required parameters for it. You can then export the data from the old collection and import it into the new collection. For more information, refer to Exporting a search collection and Importing a search collection.

Viewing the status of a search collection

To view the status of the search collection, click the collection name in the list of search collections. Manage Search shows the Content Sources and the Search collection status information of the selected search collection. The status fields show data that changes over the lifetime of the search collection. Some of the data that is displayed is as follows:
Last update completed:
Shows the date when a content source defined for the search collection was last updated by a scheduled crawl and indexed.
Note: The timeout that you might set under Stop collecting after (minutes): works as an approximate time limit. It might be exceeded by some percentage, as indexing the documents after the crawl takes more time. Therefore, allow some tolerance.

If you have a faulty search collection in your portal, the portlet shows a link that takes you to that faulty collection.

Migrating search collections

Notes:

When you upgrade to a higher version of WebSphere Portal, the data storage format is not necessarily compatible with the older version. To prevent loss of data, export all data of search collections to XML files before you upgrade. After the upgrade, you create a new search collection and use the previously exported data to import the search collection data back into your upgraded portal.

  1. If you do not do these steps, the search collections are lost after you upgrade your WebSphere Portal.
  2. When you create the search collection on the upgraded portal, type data and make selections as follows:
    • Fill in the location, the name, and the description of the new collection as required. You can match the old settings or type new ones.
    • You do not need to select a summarizer. These settings are overwritten by the imported settings when you import the data from the source search collection.
  3. You cannot migrate a portal site collection between different versions of WebSphere Portal. If you upgrade your portal from one version to another, you need to re-create the portal site collection. Proceed as follows:
    1. Document the configuration data of your portal site content source.
    2. Delete the existing portal content source.
    3. Upgrade your portal.
    4. On the upgraded portal, create a new portal site content source. Use the documented configuration data as required.
    5. Run the new portal content source.

Portlets that were crawled in the portal before the upgrade, but do not exist in the upgraded portal, are not returned by a search.

For more information about these tasks, see the topics about migrating, importing, and exporting search collections in the portal Information Center.

For details about how to export and import search collections, refer to Exporting a search collection and Importing a search collection.

Exporting a search collection

To export a search collection and its data, proceed as follows:
  1. Before you export a collection, make sure that the portal application process has write access to the target directory location. Otherwise, you might get an error message, such as File not found.
  2. Make sure that the target directory is empty or contains no files that you still need, as the export can overwrite files in that directory.
  3. Locate the search collection that you want to export.
  4. Click the Import or Export Collection icon next to the search collection in the list. Manage Search displays the Import and Export Search Collection panel.
  5. In the entry field Specify Location (full path with XML extension): type the full directory path and XML file name to which you want to export the search collection and its data. Document the names of the collections and the directory locations and target file names to which you export the collections for the import that follows.
    Note: When you specify the target directory location for the export, be aware that the export can overwrite files in that directory.
  6. Click Export to export the search collection data. Manage Search writes the complete search collection data to an XML file and stores it in the directory location that you specified. You can use this file later as the source of an import operation to import the search collection into another portal.
  7. To return to the previous panel without exporting the search collection, click the appropriate link in the breadcrumb trail.

Importing a search collection

To import the data of a search collection, proceed as follows:
  1. Before you can import the collection data, you need to create the empty shell for the search collection. You can create the empty shell by creating a search collection. You need to complete only the mandatory entry field Location of Collection. Do not add content sources or documents, as that is completed by the import.
  2. On the search collection list, locate the search collection into which you want to import the search collection data.
  3. Click the Import or Export icon next to the search collection in the list. Manage Search displays the Import and Export Search Collection panel.
  4. In the entry field Specify Location (full path with XML extension):, type the full directory path and XML file name of the search collection data, which you want to import into the selected search collection.
  5. Click Import to import the search collection data. Manage Search imports the complete search collection data from the specified XML file into the selected search collection.
  6. To return to the previous panel without importing a search collection, click the appropriate link in the breadcrumb trail.
  7. If required, you can now add content sources and documents to the search collection.
Note: When you import a collection, be aware of the following:
  1. Import collection data only into an empty collection. Do not import collection data into a target collection that has content sources or documents already.
  2. When you import collection data into a collection, the collection settings are overwritten by the imported settings. For example, the language setting is overwritten, or a summarizer is added if one was specified for the imported search collection.
  3. When you import a collection, a background process fetches, crawls, and indexes all documents that are listed by URL in the previously exported file. This process is asynchronous. It can therefore take considerable time until the documents become available.
  4. When you import a collection that contains a portal site content source that was created in a previous version of WebSphere Portal, you need to regather the portal content. You can regather the content by deleting the existing portal site content source, creating a new portal site content source, and starting a crawl on it.

Refreshing collection data

Refreshing the data of a search collection updates that collection by crawling all of its associated content sources again. To refresh a search collection, click the icon Regather documents from Content Source for that collection. Manage Search then runs complete new crawls over all of the collection's content sources. To verify progress and completion of the regathering, click the collection and view the Collection Status information.
Note: This action might require a considerable amount of system resources, as all content sources of the search collection are crawled at the same time.

Deleting a search collection

Note: If you delete a search collection that you still need, for example before an upgrade to a higher version of WebSphere Portal, make sure that you export the search collection for later import before you delete it. For details, refer to Migrating search collections.

Managing the content sources of a search collection

To work with the content sources of a search collection, click the collection name in the list of search collections. Manage Search lists the Content Sources and the Search collection status information of the selected search collection. A search collection can be configured to cover more than one content source.

From the Content Sources panel, you can select the following options or icons and do the following tasks on content sources:
  • Refresh. Click this icon to refresh the status information about the content source. While a crawl on the content source is running, this option updates the information about the crawl run time and the documents collected so far.
  • View the status information for the content source:
    Documents
    The number of documents in the content source. If you click the Refresh button during a crawl, this action shows how many documents the crawler fetched so far from the content source.
    Run Time
    The Run Time of the last crawler run on the content sources. If you click the Refresh button during a crawl, this action shows how much time the crawler used so far to crawl the content source.
    Last Run
    The date and time when the last crawler run on the content source started.
    Next Run
    The date and time of the next scheduled crawler run on the content source, if crawls are scheduled.
    Status
    The Status of the content source, that is, whether the content source is idle or a crawl is Running on the content source.
  • Select one of the icons for a specific content source and do one of the following tasks:
    • View Content Source Schedulers. This icon is displayed only if you defined scheduled crawls for this content source. If you click this icon, the portlet lists the scheduled crawls, together with the following information:
      • Start Date
      • Start Time
      • Repeat Interval
      • Next Run Date
      • Next Run Time
      • Status. This option can be disabled or enabled. You can click the link to toggle between enabling and disabling the scheduler.
    • Start Crawler. Click this icon to start a crawl on the content source. This action updates the contents of the content source by a new run of the crawler. While a crawl on the content source is running, the icon changes to Stop Crawler. Click this icon to stop the crawl. For details, refer to the section about Starting to collect documents from a content source. Portal Search refreshes different content sources as follows:
      • For website content sources, documents that were indexed before and still exist in the content source are updated. Documents that were indexed before, but no longer exist in the content source are retained in the search collection. Documents that are new in the content source are indexed and added to the collection.
      • For WebSphere® Portal sites, the crawl adds all pages and portlets of the portal to the content source. It deletes portlets and static pages from the content source that were removed from the portal. The crawl works similarly to the option Regather documents from Content Source.
      • For IBM® Web Content Manager sites, Portal Search uses an incremental crawling method. In addition to added and updated content, the Seedlist explicitly specifies deleted content. In contrast, clicking Regather documents from Content Source starts a full crawl; it does not continue from the last session and is therefore not incremental.
      • For content sources created with the seedlist provider option, a crawl on a remote system that supports incremental crawling, such as IBM Connections, behaves like a crawl on a Web Content Manager site.
    • Regather documents from Content Source. This option deletes all existing documents in the content source from previous crawls and then starts a full crawl on the content source. Documents that were indexed before and still exist in the content source are updated. Documents that were indexed before, but no longer exist in the content source are removed from the collection. Documents that are new in the content source are indexed and added to the collection.
    • Notes:
      • It is of benefit to define a dedicated crawler user ID. The pre-configured default portal site search uses the default administrator user ID wpsadmin with the default password of that user ID for the crawler. If you changed the default administrator user ID during your portal installation, the crawler uses that default user ID. If you changed the user ID or password for the administrative user ID and still want to use that user ID for the Portal Search crawler, you need to adapt the settings.

        To define a crawler user ID, select the Security tab, and update the user ID and password. Click Save to save your updates.

      • If you modify a content source that belongs to a search scope, update the scope manually to make sure that the scope still covers that content source. Especially if you changed the name of the content source, edit the scope and make sure that it is still listed there. If not, add it again.
      • If you delete a content source, the documents that were collected from this content source remain available for search by users under all scopes that included the content source before it was deleted. These documents are available until their expiration time ends. You can specify this expiration time under Links expire after (days): under General Parameters when you create the content source.
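The expiration rule in the last note above can be sketched as a simple age check: a document from a deleted content source stays searchable until its age exceeds the configured Links expire after (days): value. This is an illustration of the rule only; the function and parameter names are assumptions, not the product's internals.

```python
# Sketch of the "Links expire after (days)" rule: a document from a
# deleted content source remains searchable until its age exceeds
# the configured expiration. Names are assumptions for illustration.
from datetime import datetime, timedelta

def is_expired(last_fetched, expire_after_days, now=None):
    """True once the document's age exceeds the expiration window."""
    now = now or datetime.now()
    return now - last_fetched > timedelta(days=expire_after_days)
```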

Adding a new content source

When you create a new content source for a search collection, that content source is crawled and the search collection is populated with documents from that content source. You can determine where the crawler collects documents and what information it fetches. Some entry fields and parameters that you can specify are as follows:
  • Select the type of the content source that you want to create from the pull-down list:
    • Website. Select this option for all remote sites, including both websites and remote portal sites.
      Note: Only anonymous pages can be indexed and searched on remote portal sites.
    • Seedlist provider. Select this option if the crawler uses a seedlist as the content source for the collection.
    • Portal site. Select this option if the content source is your local portal site.
    • WCM (Managed Web Content) site. To make a content source of this type available to Portal Search, you need to create it in the Web Content Management Authoring portlet. You select the appropriate option to make it searchable and specify the search collection to which it belongs. When you complete creating the Managed Web Content site, it is listed among the content sources for the search collection that you specified. For more information, see the Web Content Management documentation.

    Your selection determines some of the entry fields and options that are available for creating the content source. For example, the option Obey Robots.txt under the tab Advanced Parameters is available only if you select Website as the content source type.

  • Select the tabs to configure various types of parameters of the content source:
    1. Setting the General Parameters
    2. Setting the Advanced Parameters
    3. Configuring the Scheduler
    4. Configuring the Filters
    5. Configuring Security

Setting the general parameters for a content source

To set the general parameters for the content source, complete the entry fields and make your selections in the Create a New Content Source box. The available fields and options differ, depending on the type of content source that you select. They are as follows.
  • Collect documents linked from this URL: Type the required web URL or portal URL in this entry field. This action determines the root URL from which the crawler starts. This field is mandatory. For portal content sources, the value for this field is completed by Manage Search.
    Notes:
    • For websites, you need to type the full name including http://. For example: http://www.cnn.com. Typing only www.cnn.com results in an error.
    • A crawler failure can be caused by URL redirection problems. If this problem occurs, try editing this field, for example, by changing the URL to the redirected URL.
  • Make your selection from the following options by selecting from the drop-down lists. The available fields and options differ, depending on the type of content source that you selected.
    Levels of links to follow:
    For crawling websites: This option determines the crawling depth, that is, the maximum number of levels of nested links that the crawler follows from the root URL while it crawls.
    Number of linked documents to collect:
    For crawling websites: This option determines the maximum number of documents that are indexed by the crawler during each crawling session. The number of indexed documents includes documents that are reindexed as their content changed.
    Stop collecting after (minutes):
    This option sets the maximum number of minutes that the crawler can run in a single session for websites.
    Note: The timeout that you set here works as an approximate time limit. It might be exceeded by some percentage. Therefore, allow some tolerance.
    Stop fetching document after (seconds):
    This option limits the time that the crawler spends trying to fetch a document. It sets the maximum time in seconds for completing the initial phase of the HTTP connection, that is, for receiving the HTTP headers. This time limit must be finite to prevent the crawler from getting stuck indefinitely on a bad connection. At the same time, it still allows the crawler to fetch large files that take a long time to download, for example, compressed files.
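The general parameters above can be sketched as a depth-limited traversal with a soft time budget. The budget is checked only between documents, which is why the Stop collecting after (minutes): timeout is an approximate limit that can be exceeded, and the root URL must carry its full scheme, as the http:// note above requires. This is a sketch under stated assumptions, not the Portal Search crawler; fetch_links is a stand-in for the real HTTP fetch.

```python
# Sketch of how the general parameters bound a crawl: "Levels of
# links to follow" limits depth, "Number of linked documents to
# collect" caps the document count, and the time budget is checked
# only between documents, so it is an approximate limit.
# fetch_links() is a hypothetical stand-in for the real HTTP fetch.
import time
from collections import deque
from urllib.parse import urlparse

def crawl(root_url, levels, max_docs, budget_seconds, fetch_links):
    if urlparse(root_url).scheme not in ("http", "https"):
        # Mirrors the note above: typing only www.cnn.com is an error.
        raise ValueError("Type the full URL including http://")
    started = time.monotonic()
    seen, queue = {root_url}, deque([(root_url, 0)])
    collected = []
    while queue and len(collected) < max_docs:
        if time.monotonic() - started > budget_seconds:
            break  # soft limit: only checked between documents
        url, depth = queue.popleft()
        collected.append(url)
        if depth < levels:  # follow nested links only to this depth
            for link in fetch_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return collected
```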

Setting the advanced parameters for a content source

When you create a new content source, some of the Advanced Parameters that you can specify are as follows:
  • Click the Advanced Parameters tab.
  • Make your selection from the following options by selecting from the drop-down lists, marking the check boxes, or entering data as required:
    Number of parallel processes:
    This parameter determines the number of threads the crawler uses in a crawling session.
    Default character encoding:
    This parameter sets the default character set that the crawler uses if it cannot determine the character set of a document.
    Note: The entry field for the Default character encoding contains the initial default value windows-1252, regardless of the setting for the Default Portal Language under Administration menu > Portal Settings > Global Settings. Enter the required default character encoding, depending on your portal language. Otherwise, documents might be displayed incorrectly under Browse Documents.
    Always use default character encoding:
    If you check this option, the crawler always uses the default character set, regardless of the document character set. If you do not check this option, the crawler tries to determine the character sets of the documents.
    Obey Robots.txt
    If you select this option, the crawler observes the restrictions that are specified in the file robots.txt when it accesses URLs for documents. This option is available only for content sources of type Website; it is not available for Portal site or Seedlist provider content sources.
    Proxy server: and Port:
    The HTTP proxy server and port that is used by the crawler. If you leave this value empty, the crawler does not use a proxy server.
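The Default character encoding behavior described above can be sketched as a decode-with-fallback: use the document's own character set when it can be determined, and fall back to the configured default (initially windows-1252) otherwise, or always when Always use default character encoding is checked. This is an illustration of the behavior only, not the crawler's actual decoding logic.

```python
# Sketch of the "Default character encoding" fallback: decode with
# the document's detected character set when available, otherwise
# (or always, if the option is checked) use the configured default,
# which is initially windows-1252. Illustrative only.
def decode_document(raw, detected_charset=None,
                    default_charset="windows-1252",
                    always_use_default=False):
    if always_use_default or not detected_charset:
        return raw.decode(default_charset, errors="replace")
    try:
        return raw.decode(detected_charset)
    except (LookupError, UnicodeDecodeError):
        return raw.decode(default_charset, errors="replace")
```

This also illustrates why the note above matters: if the default does not match your portal language, documents whose character set cannot be determined are decoded with the wrong table and display incorrectly under Browse Documents.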

Configuring the Scheduler

To configure the schedule, click the Scheduler tab. The Scheduler shows two boxes:
  • Define Schedule. Use this box to add a new schedule.
  • Scheduled Updates. This box shows the schedules at which crawls run.
Note: The time interval between crawler runs must be greater than the maximum crawler execution time, because a crawler cannot be started while it is already running. If a scheduled crawler job fires while the crawler is running, that execution is ignored, and the crawler starts again only at the next scheduled time at which it is not already running.
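The scheduler rule in the note above can be sketched as a non-reentrant job: a scheduled firing that arrives while a crawl is in progress is ignored, and the crawl starts again only at the next firing at which it is idle. The class and method names are assumptions for illustration; this is not the portal scheduler implementation.

```python
# Sketch of the scheduler note above: a crawler job that fires while
# the previous run is still in progress is ignored (non-reentrant).
# Names are assumptions for illustration only.
class CrawlerScheduler:
    def __init__(self):
        self.running = False
        self.runs_started = 0
        self.runs_skipped = 0

    def on_schedule_fired(self):
        if self.running:
            self.runs_skipped += 1  # this execution is ignored
            return False
        self.running = True
        self.runs_started += 1
        return True

    def on_crawl_finished(self):
        self.running = False

sched = CrawlerScheduler()
sched.on_schedule_fired()   # first firing: crawl starts
sched.on_schedule_fired()   # fires while running: ignored
sched.on_crawl_finished()
sched.on_schedule_fired()   # idle again: crawl starts
```

This is why the schedule interval must exceed the maximum crawl duration: otherwise every other firing is silently skipped.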

Configuring the Filters

The crawler filters control the crawler progress and the type of documents that are indexed and cataloged. To configure filters, click the Filters tab. You can define new filters in the Define Filter Rules box. The defined filters are listed in the Filtering Rules box.

Crawler filters are divided into the following two types:
URL filters
They control which documents are crawled and indexed, based on the URL where the documents are found.
Type filters
They control which documents are crawled and indexed, based on the document type.

If you define no filters at all, all documents from a content source are fetched and crawled. If you define include filters, only those documents that pass the include filters are crawled and indexed. If you define exclude filters, they override the include filters, or, if you define no include filters, they limit the number of documents that are crawled and indexed. More specifically, if a document passes one of the include filters but also passes one of the exclude filters, it is not crawled, indexed, or cataloged.

You can do the following tasks with the Filters box:
Creating a filter
When you use the option Apply rule while Collecting documents with Rule type: Include, make sure that the URL in the field Collect documents linked from this URL: fits the specified rule; otherwise no documents are collected. For instance, crawling the URL http://www.ibm.com/products with the URL filter */products/* does not give any results because the rule has a trailing slash, but the URL does not. But either crawling http://www.ibm.com/products/ with the URL filter */products/* (both with trailing slash) or crawling http://www.ibm.com/products with the URL filter */products* (no trailing slash) works.
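The filter semantics above can be sketched in a few lines: a document is crawled when it passes some include rule (or no include rules exist) and passes no exclude rule. Shell-style wildcard matching, as in Python's fnmatch module, reproduces the trailing-slash behavior from the example; this matching is illustrative and may differ in detail from the product's rule evaluation.

```python
# Sketch of the filter rules above: include rules admit documents,
# exclude rules override them. Wildcard matching via fnmatch is an
# assumption for illustration; it reproduces the trailing-slash
# behavior of the */products/* example.
from fnmatch import fnmatch

def is_crawled(url, includes, excludes):
    """True if url passes the include rules (if any) and no exclude rule."""
    if includes and not any(fnmatch(url, p) for p in includes):
        return False
    return not any(fnmatch(url, p) for p in excludes)
```

Note that `*/products/*` requires a literal `/products/` segment, so the URL `http://www.ibm.com/products` without a trailing slash does not match, exactly as the example describes.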

Configuring security for a content source

You can configure the security for indexing secured content sources and repositories that require authentication. To configure the security for a content source, click the Security tab. Manage Search shows two boxes:
  • Define Security Realm. Use this box to add new secured content sources.
  • Security realms. This box shows a list of existing security realms.
In the Define Security Realm box, complete the following entry fields:
  • User Name. Enter the user ID by which the crawler can access the secured content source or repository.
  • Password. Enter the password for the user ID that you entered under User Name.
  • Host name. Enter the name of the server. For Portal sites and seedlist providers this entry is not required. If you leave it blank, the host name is inferred from the provided root URL.
  • Realm. Enter the realm of the secured content source or repository.
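The Host name note above says that a blank field is inferred from the provided root URL. A minimal sketch of that inference, using the standard URL parser; the function name and sample URLs are assumptions for illustration.

```python
# Sketch of the Host name note above: when the field is left blank,
# the host is inferred from the root URL of the content source.
# Function name and sample URLs are illustrative assumptions.
from urllib.parse import urlparse

def infer_host(root_url, host_name=""):
    """Use the explicit host name if given, else parse it from the URL."""
    return host_name or urlparse(root_url).hostname
```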

Starting to collect documents from a content source

To start an update from a content source manually, proceed as follows:
  1. Click Start Crawler for the content source for which you want to start the update. This action updates the contents of the content source by a new run of the crawler. It fetches the documents from this content source. If they are new or modified, they are updated in the search collection. While a crawl on the content source is running, the icon changes to Stop Crawler. Click this icon to stop the crawl. Portal Search refreshes different content sources as follows:
    • For website content sources, documents that were indexed before and still exist in the content source are updated. Documents that were indexed before, but no longer exist in the content source are retained in the search collection. Documents that are new in the content source are indexed and added to the collection.
    • For WebSphere Portal sites, the crawl adds all pages and portlets of the portal to the content source. It deletes portlets and static pages from the content source that were removed from the portal. The crawl works similarly to the option Regather documents from Content Source.
    • For IBM Web Content Manager sites, Portal Search uses an incremental crawling method. In addition to added and updated content, the Seedlist explicitly specifies deleted content. In contrast, clicking Regather documents from Content Source starts a full crawl; it does not continue from the last session and is therefore not incremental.
    • For content sources created with the seedlist provider option, a crawl on a remote system that supports incremental crawling, such as IBM Connections, behaves like a crawl on a Web Content Manager site.
  2. To view the updated status information about the progress of the crawl process, click Refresh. The following status information is updated:
    Documents
    Shows how many documents the crawler fetched so far from the selected content source.
    Run time
    Shows how much time the crawler used so far to crawl the content source.
    Status
    Shows whether the crawler for the content source is running or idle.

Verifying the address of a content source

Use the option Verify Address to verify the URL address of a selected content source.

Locate the content source that you want to verify, and click Verify Address for that content source. If the web content source is available and not blocked by a robots.txt file, Manage Search returns the message Content Source is OK. If the content source is invalid, inaccessible, or blocked, Manage Search returns an error message.

When you create a new content source, Manage Search starts the Verify Address feature.

Search Scopes and Custom Links

With Search Scopes you can view and manage search scopes and custom links. The search scopes are displayed to users as search options in the drop-down list of the search box in the banner and in the Search Center portlet. Users can select the scope that is relevant for their search queries. You can configure scopes in one of the following ways:
  • One or more search locations (content sources).
  • Document features or characteristics, such as the document type.
WebSphere Portal includes these scopes:
All Sources
This scope includes documents of all types from all content sources in a user's search.
Managed Web Content
This scope restricts the search to sites that were created by Web Content Management.

You can add your own custom search scopes. You can add an icon to each scope. Users see this icon for the scope in the pull-down selection list of scopes.

You can also add new custom links to search locations. Custom links point to external web locations, such as Google or Yahoo. The Search Center global search lists the custom links for users in the selection menu of search options.

Managing Search Scopes and Custom Links

From the Search Scopes and Custom Links panel, select the following options or icons and do the following tasks on search scopes and custom links:
  • New Scope. Click this option to create a new search scope. For details, refer to Creating a new search scope.
  • Refresh. Click this option to refresh the list of search scopes. This action updates the information for the scopes, for example, the status of scopes, or updates that another administrator made on scopes.
  • Move Down and Move Up arrows. Click these arrows to move search scopes up and down in the list. This action determines the sequence in which the scopes are listed in the drop-down menu from which users select search options for their searches with the Search Center portlet.
  • Edit Search Scope. Click this icon to work with a search scope or modify it. For details, refer to Editing a search scope.
  • Delete Search Scope. Click this icon to delete a search scope.
  • New Custom Link. Click this option to add a new custom link. For details, refer to Adding a new custom link.
  • Edit Custom Link. Click this icon to work with a custom link or modify it.
  • Delete Custom Link. Click this icon to delete a custom link.
Note: Users must clear their browser cache for your changes to take effect. For example, for a new scope to be available, or for the new default scope to be shown in the correct position.

Creating a new search scope

To create a new search scope, click the New Scope button. Manage Search displays the New Search Scope page. Enter the required data in the fields and select from the available options:
Scope Name:
Enter a name for the new search scope. The name must be unique within the current portal or virtual portal. This field is mandatory.
Custom Icon URL:
Enter the URL location where the portal can locate the scope icon that you want to be displayed with the search options for users. If the icon file exists in the default icon directory wps/images/icons, you need to type only the icon file name. If the icon file is in a different directory path, type the absolute file path with the file name. Click Check icon path to ensure that the icon is available at the URL you specified.
Status:
Set the status of the search scope as you require. To make the scope available to users, set the status to Active.
Visible to anonymous users:
Select Yes to make the search scope available to users who use your portal without logging in. Select No to make the scope available to authenticated users only.
Query text (optional):
Enter a query text. This query text is invisibly appended to all searches in this scope. Searches by users return results that match both the user's search and the query text that you enter in this field. Both sets of results are weighted with the same relevance in the result list. The query text that you enter must conform to the syntax rules for entering a query in the Search Center. For more information about these query syntax rules, see the Search Center portlet help.
Select Locations
Select one or more locations as required. Only documents from these search locations or content sources are searched when users select this scope for their search.
Note: The location tree also shows content sources that were deleted if they still contain documents in the collection. After a deleted content source no longer contains any documents, the cleanup daemon removes it from the location tree.
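The effect of the optional query text can be pictured as a simple combination of the user's query with the scope's hidden query text, where both parts must match. The following Python sketch illustrates that idea under the assumption that the parts are joined as one combined query; the scope query text author:smith is a hypothetical example, and the actual combination logic is internal to Portal Search:

```python
def effective_query(user_query: str, scope_query_text: str) -> str:
    """Sketch of how a scope's hidden query text extends a user's search.
    Hypothetical illustration; Portal Search combines the parts internally."""
    if not scope_query_text:
        return user_query
    # Both parts must match, so the scope text is appended to the query.
    return f"{user_query} {scope_query_text}"

# A user searching "quarterly report" in a scope with query text "author:smith"
# effectively searches for both together:
print(effective_query("quarterly report", "author:smith"))
# quarterly report author:smith
```

Because the hidden text becomes part of every query in the scope, a syntax error in it would affect all searches that use the scope, which is why it must follow the Search Center query syntax rules.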

To set names and descriptions for the search scope, you must create and save the scope first. Then, locate the scope in the scopes list, and edit the scope by clicking the Edit icon. The option for setting names and descriptions in other locales is available only on the Edit Search Scope page.

Note: If you modify a content source that belongs to a search scope, update the scope manually to make sure that the scope still covers that content source. In particular, if you changed the name of the content source, edit the scope and verify that the content source is still listed there. If it is not, add it again.


Copyright IBM Corporation 2000, 2014.