Indexing your office documents with Elastic and FSCrawler
David Pilato
Developer | Evangelist, Community
@dadoonet

The Elastic Search Platform
• Solutions: Enterprise Search, Observability, Security
• Kibana: Explore, Visualize, Engage
• Elasticsearch: Store, Search, Analyze
• Integrations: Connect, Collect, Alert
• Deployment: Public cloud, Hybrid, On-premises

Parsing a stream and getting content and metadata

import java.io.InputStream;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.metadata.TikaCoreProperties;
import org.apache.tika.parser.DefaultParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

static void extractTextAndMetadata(InputStream stream) throws Exception {
    // Collects the extracted body text (default write limit: 100,000 characters)
    BodyContentHandler handler = new BodyContentHandler();
    Metadata metadata = new Metadata();
    try (stream) {
        // DefaultParser auto-detects the document format and delegates to the right parser
        new DefaultParser().parse(stream, handler, metadata, new ParseContext());
        String extractedText = handler.toString();
        String title = metadata.get(TikaCoreProperties.TITLE);
        String keywords = metadata.get(TikaCoreProperties.KEYWORDS);
        String author = metadata.get(TikaCoreProperties.CREATOR);
    }
}
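The same extraction can be tried from the command line with the Tika app jar; a quick sketch, assuming a locally downloaded tika-app jar (the version number and file names here are illustrative):

$ java -jar tika-app-2.9.1.jar --text mydocument.pdf      # extracted body text
$ java -jar tika-app-2.9.1.jar --metadata mydocument.pdf  # metadata (title, author, ...)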

ingest-attachment plugin: extracting content from BASE64 or CBOR

An ingest pipeline

The ingest-attachment processor plugin uses Tika behind the scenes
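As a sketch of what the demo builds, assuming a cluster reachable on localhost:9200 (the index name my-index is illustrative; the sample BASE64 payload decodes to a tiny RTF file):

# Pre-8.x, install the plugin first: bin/elasticsearch-plugin install ingest-attachment
$ curl -XPUT "http://localhost:9200/_ingest/pipeline/attachment" \
  -H 'Content-Type: application/json' -d '{
  "description": "Extract text and metadata with the attachment processor",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}'

# Index a document through the pipeline; "data" holds the BASE64-encoded file
$ curl -XPUT "http://localhost:9200/my-index/_doc/1?pipeline=attachment" \
  -H 'Content-Type: application/json' -d '{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}'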

Demo https://cloud.elastic.co

FSCrawler: You know, for files…

Disclaimer
This project is a community project. It is not officially supported by Elastic. Support is provided only by the FSCrawler community on Discuss and Stack Overflow.
http://discuss.elastic.co/
https://stackoverflow.com/questions/tagged/fscrawler

FSCrawler Architecture
• Inputs: Local Dir, Mount Point, SSH / SCP / FTP, HTTP REST
• Filters: JSON (noop), XML, Apache Tika
• Outputs: ES 6/7/8
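A minimal job definition gives the flavor of how an input and an output are wired together; a sketch, assuming FSCrawler 2.x, a job named idx, and a local cluster (paths and URLs are placeholders):

$ cat ~/.fscrawler/idx/_settings.yaml
---
name: "idx"
fs:
  url: "/path/to/docs"
  update_rate: "15m"
elasticsearch:
  nodes:
  - url: "https://127.0.0.1:9200"

# Start the crawler for that job
$ bin/fscrawler idx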

FSCrawler Key Features
• Many more formats than the ingest-attachment plugin
• OCR (Tesseract)
• Much more metadata than the ingest-attachment plugin (see https://fscrawler.readthedocs.io/en/latest/admin/fs/elasticsearch.html#generated-fields)
• Extraction of non-standard metadata
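OCR, for instance, is driven from the job settings; a sketch, assuming Tesseract is installed and using the fs.ocr options from the FSCrawler documentation:

# In ~/.fscrawler/idx/_settings.yaml: run Tesseract on images and scanned PDFs
fs:
  url: "/path/to/scans"
  ocr:
    language: "eng"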

Documentation
• https://fscrawler.readthedocs.io/
• https://fscrawler.readthedocs.io/en/latest/user/tutorial.html
• https://fscrawler.readthedocs.io/en/latest/user/formats.html
• https://fscrawler.readthedocs.io/en/latest/admin/fs/index.html

Demo https://cloud.elastic.co

FSCrawler: even better with a UI

FSCrawler Architecture
• Inputs: Local Dir, Mount Point, SSH / SCP / FTP, HTTP REST
• Filters: JSON (noop), XML, Apache Tika
• Outputs: ES 6/7/8, WP 7/8 (Workplace Search)

Demo https://cloud.elastic.co

Beta in 8.2: Network drives connector package for Enterprise Search
https://github.com/elastic/enterprise-search-network-drives-connector/

FSCrawler v3 Roadmap ("It depends")

Extended CLI parameters
https://github.com/dadoonet/fscrawler/issues/857

$ bin/fscrawler --input.fs.dir=/path/to/files \
    --filter.tika.indexed_chars=100% \
    --output.elasticsearch=https://localhost:9200

$ bin/fscrawler --input.fs.dir=/path/to/files \
    --filter.tika.lang_detect=true \
    --output.wpsearch=https://localhost:3002

Add support for plugins (inputs, filters and outputs) with pf4j
https://github.com/dadoonet/fscrawler/issues/1114

Add rsync input
https://github.com/dadoonet/fscrawler/issues/377

$ bin/fscrawler --input.rsync.port=14415
$ rsync --port=14415 -r example localhost::Uploads

Add S3 input
https://github.com/dadoonet/fscrawler/issues/377

$ bin/fscrawler --input.s3.object=s3://foo/bar.txt
$ bin/fscrawler --input.s3.bucket=s3://foo

Add Dropbox input
https://github.com/dadoonet/fscrawler/issues/264

$ bin/fscrawler --input.dropbox.access_token=XYZ \
    --input.dropbox.dir=/path/to/files

Add Beats output
https://github.com/dadoonet/fscrawler/issues/682

FSCrawler Architecture
• Inputs: Local Dir, Mount Point, SSH / SCP / FTP, HTTP REST
• Filters: JSON (noop), XML, Apache Tika
• Outputs: ES 6/7/8, WP 7/8, Beats

Add Beats output
https://github.com/dadoonet/fscrawler/issues/682

$ bin/logstash -e '
  input { beats { port => 5044 } }
  output { elasticsearch { hosts => ["https://localhost:9200"] } }'

$ bin/fscrawler --output.beats.url=https://localhost:5044

Manage jobs from the REST Service
https://github.com/dadoonet/fscrawler/issues/1549

# Create
curl -XPUT http://127.0.0.1:8080/_jobs/my_job -d '{
  "type": "fs",
  "fs": { "url": "file://foo/bar.txt" }
}'

# Start / Stop
curl -XPOST http://127.0.0.1:8080/_jobs/my_job/_start
curl -XPOST http://127.0.0.1:8080/_jobs/my_job/_stop

# Job info and status
curl -XGET http://127.0.0.1:8080/_jobs/my_job

# Remove the job
curl -XDELETE http://127.0.0.1:8080/_jobs/my_job

Read from any FS Provider using the REST Service
https://github.com/dadoonet/fscrawler/issues/1247

curl -XPOST http://127.0.0.1:8080/_upload -d '{
  "type": "fs",
  "fs": { "url": "file://foo/bar.txt" }
}'

curl -XPOST http://127.0.0.1:8080/_upload -d '{
  "type": "s3",
  "s3": { "url": "s3://foo/bar.txt" }
}'
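For comparison, the REST service that ships today already accepts multipart file uploads; a minimal sketch, assuming the default endpoint documented for the current release (start the job with the --rest option):

$ bin/fscrawler idx --rest

# Send a file to the running job
$ echo "This is my text" > test.txt
$ curl -F "file=@test.txt" "http://127.0.0.1:8080/fscrawler/_upload"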

Other ideas
• New local file crawling implementation (WatchService): #399
• Store jobs, configurations, status in Elasticsearch: #717
• Switch to ECS format for the most common fields: #677
• Extract ACL information: #464

Thanks!
PRs are warmly welcome!
https://github.com/dadoonet/fscrawler