Streamlined Checkpointing Design Details
- split URI processing into two phases: work that is transient (can be thrown away as long as the URI is retried) and work that changes persistent stats or structures (which should complete to consistency before the checkpoint proceeds)
- the step right after the laggy network fetch is the threshold between the two phases
- initiating a checkpoint requires an exclusive lock on a frontier-atomic-mutation lock that is ordinarily shared by multiple worker threads (see the sketch after this list)
- allow holding URIs after fetch - a semi-paused crawler - so a checkpoint can occur as soon as all persistence-affecting processing finishes (but without waiting for any fetches to complete)
- use running on-disk structures that can simply be frozen/bookmarked, rather than requiring a fresh dump/copy (e.g. like LVM snapshots)
- move as much plain copying as possible outside the crawler process: a checkpoint is mostly a manifest of files needed to restore the crawl; it's up to the operator to copy those elsewhere if desired
- each object is responsible for its own checkpointing
- deemphasize serialization; save most component state in a loose textual format (JSON or XML) for easier restore-to-altered-code or offline hand-editing
- optional Checkpoint component in config; if present, all components should restore from it
- wiring is transient – it always comes from configuration
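
A minimal sketch of the lock idea above, using a standard read/write lock; the class and method names (FrontierMutationGate, applyPersistentMutations, runCheckpoint) are illustrative assumptions, not Heritrix APIs. Worker threads share the read side while applying persistent frontier/stat mutations; the checkpoint trigger takes the write side, so the checkpoint begins only once in-flight persistent-phase work has drained, without waiting on any network fetches (which happen outside the lock):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

/** Hypothetical sketch of the frontier-atomic-mutation lock. */
class FrontierMutationGate {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    /** Worker thread: wrap only the post-fetch, persistence-affecting phase. */
    void applyPersistentMutations(Runnable persistentPhase) {
        lock.readLock().lock();
        try {
            persistentPhase.run();   // updates persistent stats/structures
        } finally {
            lock.readLock().unlock();
        }
    }

    /** Checkpoint thread: exclusive access while state is frozen/bookmarked. */
    void runCheckpoint(Runnable checkpointAction) {
        lock.writeLock().lock();     // waits for current readers to drain; new persistent phases block
        try {
            checkpointAction.run();
        } finally {
            lock.writeLock().unlock();
        }
    }
}
```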
Each crawler component with checkpointable state will implement a distinguished interface that both allows a request that it checkpoint its state and exposes an (optional) startState property.
When a checkpoint is triggered/requested, they will write their state to the provided location (possibly including pointers to pre-existing in-crawl files that make up part of their state).
When a crawl is started, if the startState property is configured, they will perform an additional load-from-disk of their state.
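
A minimal sketch of what such an interface and one implementing component might look like; the names (CheckpointableComponent, writeCheckpoint, setStartState, SeedListComponent) are hypothetical illustrations, not the actual Heritrix interface. In line with the notes above, state is written in a loose, hand-editable textual form:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/** Illustrative interface only -- names are assumptions, not the real Heritrix API. */
interface CheckpointableComponent {
    /** Write this component's state under the checkpoint directory. */
    void writeCheckpoint(Path checkpointDir) throws IOException;

    /** Optional: restore state from an earlier checkpoint when a crawl starts. */
    void setStartState(Path checkpointDir) throws IOException;
}

/** Example component storing its state as a loose, hand-editable text file. */
class SeedListComponent implements CheckpointableComponent {
    private List<String> seeds = List.of();

    @Override
    public void writeCheckpoint(Path checkpointDir) throws IOException {
        // One value per line: easy to inspect or hand-edit offline.
        Files.write(checkpointDir.resolve("seed-list.txt"), seeds);
    }

    @Override
    public void setStartState(Path checkpointDir) throws IOException {
        Path saved = checkpointDir.resolve("seed-list.txt");
        if (Files.exists(saved)) {
            seeds = Files.readAllLines(saved);
        }
    }
}
```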