Database Discovery

Published By: AWS     Published Date: Jul 26, 2019
What you'll learn in this webinar: When you migrate your databases and applications from Oracle to Amazon Aurora, you can take advantage of the modern, scalable cloud infrastructure available on Amazon Web Services (AWS) to optimize your operations so you can focus on innovation. Clckwrk analyzes your existing database environment and creates a plan to adopt an elastic, scalable, cloud-native database solution that can grow with your business and help you eliminate the exorbitant costs of commercial database licenses. Watch this webinar to learn how: you can accelerate your migration off Oracle databases to Amazon Aurora with minimal disruption to your business; you can refactor code to work in your new database; and Clckwrk can help you establish a migration strategy tailored to your needs, supported by a consulting practice that covers everything from discovery to implementation.
Tags : 
    
AWS
Published By: IBM     Published Date: Sep 22, 2011
Companies need capabilities for identifying data assets and relationships, assessing data growth, and implementing tiered storage strategies - capabilities that information governance can provide. It is important to classify enterprise data, understand data relationships, and define service levels. Database archiving has proven effective in managing continued application data growth, especially when it is combined with data discovery.
Tags : 
ibm, application data, technology, enterprise, database archive
    
IBM
Published By: IBM Corporation     Published Date: Jun 09, 2011
This Research Brief categorizes databases as a "dangerous and growing security gap" - and offers steps to improve database security across the enterprise.
Tags : 
ibm, guardium, database security, risk, database discovery, vulnerability scanning, penetration testing, user monitoring, logging, encryption, policy-based enforcement
    
IBM Corporation
Published By: mindSHIFT     Published Date: Nov 29, 2007
Have you adjusted your data retention policies and electronic discovery procedures to comply with the new Federal Rules of Civil Procedure (FRCP)? Learn how email archiving can help you with these electronic discovery requirements.
Tags : 
frcp, secure content, secure data, data protection, database security, compliance, frcp compliance, data governance, e-discovery, legal, law firm, mindshift
    
mindSHIFT
Published By: SAS     Published Date: Oct 18, 2017
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. To help users prepare, this TDWI Best Practices Report defines data lake types, then discusses their emerging best practices, enabling technologies and real-world applications. The report's survey quantifies user trends and readiness f
Tags : 
    
SAS
Published By: SAS     Published Date: Mar 06, 2018
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics, and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. With the right end-user tools, a data lake can enable the self-service data practices that both technical and business users need. These practices wring business value from big data, other new data sources, and burgeoning enterprise da
Tags : 
    
SAS
Published By: SAS     Published Date: Aug 28, 2018
When designed well, a data lake is an effective data-driven design pattern for capturing a wide range of data types, both old and new, at large scale. By definition, a data lake is optimized for the quick ingestion of raw, detailed source data plus on-the-fly processing of such data for exploration, analytics and operations. Even so, traditional, latent data practices are possible, too. Organizations are adopting the data lake design pattern (whether on Hadoop or a relational database) because lakes provision the kind of raw data that users need for data exploration and discovery-oriented forms of advanced analytics. A data lake can also be a consolidation point for both new and traditional data, thereby enabling analytics correlations across all data. To help users prepare, this TDWI Best Practices Report defines data lake types, then discusses their emerging best practices, enabling technologies and real-world applications. The report's survey quantifies user trends and readiness f
Tags : 
    
SAS