Information Lifecycle Management (ILM) is the process of understanding, archiving, and purging data. In short, it is a strategy for managing business data over its lifetime in order to reduce storage costs and improve data access within the database. It is the practice of applying policies for the effective management of information throughout its useful life.
ILM consists of the policies, processes, practices, and tools that are used to align the business value of information with the most appropriate and cost-effective IT infrastructure, from the time information is conceived through its final disposition. It leverages compression and storage tiering. In some cases, compression may be sufficient; in others, you may need to optimize dormant data further by moving it to high-density, low-cost storage.
Partitioning, introduced in Oracle 8, is one option for utilizing storage tiering or compression tiering for dormant data. Partitioning, however, must be implemented manually.
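As a sketch of the manual approach, a table can be range-partitioned by date so that older partitions are placed on cheaper storage and compressed. The table, column, and tablespace names below are purely illustrative:

```sql
-- Hypothetical example: a range-partitioned orders table.
-- Dormant 2011 data sits compressed on a low-cost tablespace,
-- while current data stays on faster storage.
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_2011 VALUES LESS THAN (TO_DATE('01-01-2012','DD-MM-YYYY'))
    TABLESPACE low_cost_tbs COMPRESS,
  PARTITION p_2012 VALUES LESS THAN (TO_DATE('01-01-2013','DD-MM-YYYY'))
    TABLESPACE fast_tbs
);
```

The manual cost is visible here: as data ages, the DBA must add new partitions and move or compress old ones explicitly.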
In Oracle Database 11g, life cycle event scanning and subsequent actions are performed manually. In Oracle Database 12c, new solutions allow the setting of policies that define application-specified rules for information lifecycle management. The rules enforce data flows automatically with minimal manual intervention.
One of the available options for ILM, Automatic Data Optimization (ADO) introduced in Oracle Database 12c, provides policies to automatically compress data according to user-defined rules. It can also automatically move data to satisfy both space pressure and data management requirements.
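An ADO compression policy is declared directly on the table with the ILM clause of ALTER TABLE. The table name and the 30-day threshold below are illustrative choices, not prescribed values:

```sql
-- Hypothetical example: ask ADO to compress a segment with
-- Advanced Row Compression once it has seen no modifications
-- for 30 days.
ALTER TABLE orders ILM ADD POLICY
  ROW STORE COMPRESS ADVANCED
  SEGMENT AFTER 30 DAYS OF NO MODIFICATION;
```

Once the policy is in place, the database evaluates it in the background; no manual scanning or scheduled rebuild job is required.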
A typical data flow is represented below:
Oracle Database 12c allows for activity tracking with Heat Map, which provides the ability to track and mark data as it goes through life cycle changes. Data accesses are tracked at the segment level, for example, at the table or index level. Data modifications are tracked at both the block and segment levels. Block-level and segment-level statistics are collected in memory and stored in tables in the SYSAUX tablespace.
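Heat Map is switched on with a single initialization parameter, and the collected statistics can then be inspected through the usual dynamic and dictionary views. The column list below is a representative subset:

```sql
-- Enable Heat Map tracking for the instance.
ALTER SYSTEM SET HEAT_MAP = ON;

-- Recent segment-level activity held in memory:
SELECT object_name, subobject_name, track_time, full_scan, lookup_scan
FROM   v$heat_map_segment;

-- Persisted segment-level statistics from the SYSAUX repository:
SELECT object_name, segment_write_time, segment_read_time, full_scan
FROM   dba_heat_map_segment;
```

These timestamps and scan indicators are exactly what ADO policies consult when deciding whether a segment has gone dormant.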
ADO allows for the creation of policies that use Heat Map statistics to compress and move data only when necessary. It automatically evaluates and executes the policies that perform compression and storage tiering actions. This gives users the ability to keep both active operational data and dormant archived data in the same database tables: applications can easily access only the data that is in an operationally active state, even though archived data is kept in the same table.
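Storage tiering is declared the same way as compression, with a TIER TO policy. Again, the table and tablespace names are hypothetical:

```sql
-- Hypothetical example: when the source tablespace comes under
-- space pressure, ADO moves dormant segments of this table to a
-- low-cost tablespace, transparently to the application.
ALTER TABLE orders ILM ADD POLICY
  TIER TO low_cost_tbs;
```

Because the rows never leave the table, application SQL is unchanged; only the underlying storage location moves.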
Stay tuned: Heat Map and ADO will be discussed in detail in future postings.