Sybase Tools & Utilities

 

Current Sybase Environment

Q: Which version of Sybase ASE are we using? Are there multiple instances/environments?
A: We run separate dev, test, and production instances. Production is on Sybase ASE 15.7, while dev and test are still on 15.0.

Q: Are there any custom configurations (e.g., page size, tempdb, buffer cache tuning)?
A: Yes, the page size is set to 8 KB instead of the default 2 KB for better I/O efficiency. We also tuned the buffer cache to 4 GB due to high concurrent access. Tempdb is isolated on a separate disk for performance.
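
For reference, here is a minimal sketch of how those settings can be inspected or adjusted from isql; the 4 GB figure matches the answer above, but treat the commands as illustrative rather than the exact production configuration.

    -- Check the server page size (returned in bytes; 8192 = 8 KB)
    select @@maxpagesize
    go

    -- Inspect the default data cache, then size it to 4 GB
    sp_cacheconfig 'default data cache'
    go
    sp_cacheconfig 'default data cache', '4G'
    go

    -- Confirm which devices tempdb is placed on
    sp_helpdb tempdb
    go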

Q: What is the size of each Sybase database, and how fast is it growing?
A: The main OLTP database is around 500 GB, growing at approximately 10 GB per month due to heavy transaction volume. The reporting database is 1.2 TB but grows more slowly.

Q: Are we using any Sybase-specific features (like partitioning, text/image datatypes, LOBs, proxy tables)?
A: Yes, we use partitioned tables on the sales data for performance, and several text/image columns for storing customer documents. We also use proxy tables to link data from a legacy Informix system.
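
For illustration, a range-partitioned table of the kind used for the sales data looks roughly like this; the table, column, and partition names below are hypothetical, not the actual schema.

    -- Hypothetical sales table partitioned by year on the sale date
    create table sales_data (
        sale_id    int      not null,
        sale_date  datetime not null,
        amount     money    not null
    )
    partition by range (sale_date) (
        p2023 values <= ('Dec 31 2023 11:59PM'),
        p2024 values <= ('Dec 31 2024 11:59PM'),
        pmax  values <= (MAX)
    )
    go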

Q: How is user authentication handled in Sybase (e.g., LDAP integration, roles, login triggers)?
A: We use LDAP integration for centralized user authentication, with Sybase roles mapped accordingly. Some login triggers are in place to enforce password policies.
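
A minimal sketch of how such a login trigger is wired up in ASE: a stored procedure (the name below is hypothetical) is attached to a login via the "login script" option so it runs at connect time. The procedure must exist in the login's default database.

    -- Hypothetical login-script procedure; the real one enforces password policy
    create procedure dbo.usp_login_policy_check
    as
    begin
        -- e.g., check password age, lock stale accounts, record the login
        return 0
    end
    go

    -- Attach the procedure to a login so it executes at every connect
    sp_modifylogin some_login, 'login script', 'usp_login_policy_check'
    go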


🧱 Database Schema & Code

Q: What’s the complexity of the schema—do we have deeply nested views, or circular dependencies?
A: The schema is moderately complex. We have several nested views up to 4 layers deep. Circular dependencies are rare but exist in a few stored procedures managing hierarchical data.
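
When untangling those nested views and circular references, sp_depends is the usual starting point; a quick sketch (the view name is a placeholder):

    -- List the objects a view depends on, and the objects that depend on it
    sp_depends 'dbo.vw_customer_summary'
    go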

Q: How many stored procedures, triggers, and user-defined functions do we have?
A: About 1,500 stored procedures, roughly 200 triggers (mostly on transaction tables), and around 50 user-defined functions for date and string manipulation.
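
Those counts can be reproduced per database with a query against sysobjects; a sketch (in ASE 15, 'P' = procedure, 'TR' = trigger, 'SF' = SQL function):

    -- Count procedures, triggers and SQL functions in the current database
    select type, count(*) as object_count
    from   sysobjects
    where  type in ('P', 'TR', 'SF')
    group  by type
    go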

Q: Are there undocumented or dynamic SQL procedures that will be hard to track or migrate?
A: Yes, roughly 10% of stored procedures use dynamic SQL for flexible reporting, which may require manual review during migration.
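
To find those candidates up front, the procedure source stored in syscomments can be scanned for dynamic-SQL constructs; a rough sketch (the pattern list is illustrative, not exhaustive):

    -- Flag stored procedures whose source text contains dynamic SQL
    select distinct o.name
    from   sysobjects  o
    join   syscomments c on c.id = o.id
    where  o.type = 'P'
      and (   c.text like '%exec(%'
           or c.text like '%execute(%'
           or lower(c.text) like '%execute immediate%')
    order  by o.name
    go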

Q: Are any system tables or undocumented features being used?
A: We use some system tables to audit transaction logs, but no undocumented features to our knowledge.

Q: Is there a naming convention or structure in stored procedures that implies business logic?
A: Yes, procedures prefixed with “usp_” are user-level operations, “sp_” are system processes, and names usually reflect the business domain, e.g., usp_CustomerOrders.


⏱️ Performance & Monitoring

Q: Are we using Query Plans (set showplan on) or tools like sp_showplan, dbcc for performance tuning?
A: Yes, DBAs regularly use set showplan on and sp_showplan to analyze slow queries. DBCC checks run nightly for database integrity.
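
A typical tuning pass along those lines looks roughly like this; the query, spid, and database name are placeholders.

    -- Show the optimizer's plan for a suspect query
    set showplan on
    go
    select count(*) from dbo.orders where order_date >= 'Jan 1 2024'
    go
    set showplan off
    go

    -- Inspect the plan of a session that is already running (spid 42 here)
    sp_showplan 42, null, null, null
    go

    -- Nightly integrity check
    dbcc checkdb(salesdb)
    go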

Q: Are there long-running jobs or queries that are sensitive to performance changes post-migration?
A: Our nightly batch jobs that generate sales reports run for several hours and are sensitive to any performance regression. Real-time OLTP queries require low latency and must not be impacted.

Q: Are there known performance issues in Sybase today (blocking, deadlocks, I/O waits)?
A: We experience occasional deadlocks on the order processing tables during peak hours. I/O waits are minimal due to SSD storage.
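
Two standard ways to dig into those deadlocks are printing deadlock chains to the errorlog and querying the MDA tables; a sketch (the MDA query assumes monitoring and the deadlock pipe are enabled):

    -- Write full deadlock chain details to the ASE errorlog
    sp_configure 'print deadlock information', 1
    go

    -- Review recent deadlocks captured by the MDA tables
    select * from master..monDeadLock
    go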

Q: Do we use any caching or materialized views (e.g., summary tables) that need special handling?
A: Summary tables are used for monthly reports and updated via ETL nightly; these will need to be migrated and validated carefully.
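
The nightly refresh follows a simple rebuild pattern; a minimal sketch with hypothetical table names (the real ETL adds error handling, batching, and logging):

    -- Rebuild the monthly summary from the detail table
    truncate table dbo.sales_summary_monthly

    insert into dbo.sales_summary_monthly (sale_year, sale_month, total_amount, order_count)
    select d.sale_year, d.sale_month, sum(d.amount), count(*)
    from  (select datepart(yy, sale_date) as sale_year,
                  datepart(mm, sale_date) as sale_month,
                  amount
           from   dbo.sales_detail) d
    group  by d.sale_year, d.sale_month
    go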


⚙️ Jobs, Scheduling, and Integration

Q: Are there scheduled jobs or batch processes running via cron, at, or Sybase Job Scheduler?
A: Yes, several batch jobs run via Sybase Job Scheduler and cron scripts that perform backups, ETL loads, and report generation.
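
For reference, the kind of commands those backup jobs execute (database names and paths are placeholders):

    -- Full database dump, run nightly
    dump database salesdb to '/sybase/backups/salesdb_full.dmp'
    go

    -- Transaction log dump, run during the day to keep the log in check
    dump transaction salesdb to '/sybase/backups/salesdb_tran.dmp'
    go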

Q: Are there linked servers, proxy tables (e.g., remote table access via srvclass) in use?
A: Proxy tables connect to an Oracle data warehouse for consolidated reporting and are used in some cross-system queries.
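
A sketch of how such a proxy table is defined through Component Integration Services; the server, connection, and object names below are hypothetical.

    -- Register the remote Oracle server (DirectConnect class)
    sp_addserver ORA_DW, direct_connect, 'ora_dw_conn'
    go

    -- Map a local proxy table onto the remote warehouse table
    create proxy_table ora_sales_fact
    at 'ORA_DW..DWH.SALES_FACT'
    go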

Q: Is the database integrated with ETL tools, third-party apps, or external services using ODBC/JDBC?
A: We use Informatica for ETL jobs and ODBC connections from a Java-based CRM application.


🔐 Security & Auditing

Q: What are the key roles, permissions, or security groups defined in Sybase?
A: Roles include db_owner, db_datareader, db_datawriter, with custom roles like sales_readonly and etl_write.
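
As an example, a custom role like sales_readonly is typically set up along these lines (table and login names are placeholders):

    -- Define the role and give it read access to the reporting tables
    create role sales_readonly
    go
    grant select on dbo.orders to sales_readonly
    grant select on dbo.order_items to sales_readonly
    go

    -- Grant the role to a login and make it active by default
    grant role sales_readonly to report_user
    go
    sp_modifylogin report_user, 'add default role', 'sales_readonly'
    go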

Q: Are there any user-defined auditing mechanisms in place (e.g., login/logout tracking)?
A: Yes, we log login/logout events to a custom audit table and track failed login attempts.
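
The audit table and the insert issued by the tracking logic look roughly like this; the object names are illustrative, not the production ones.

    -- Hypothetical audit table for login/logout tracking
    create table dbo.login_audit (
        event_time  datetime    default getdate() not null,
        login_name  varchar(30) not null,
        host_name   varchar(30) null,
        event_type  varchar(10) not null  -- 'LOGIN', 'LOGOUT', 'FAILED'
    )
    go

    -- Row written when a session connects
    insert into dbo.login_audit (login_name, host_name, event_type)
    values (suser_name(), host_name(), 'LOGIN')
    go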

Q: Do we use encrypted columns, obfuscation, or data masking at the Sybase level?
A: No native encryption, but sensitive fields like credit card numbers are stored encrypted by the application layer before insertion.


📤 Data Export & Migration

Q: What method are we using to extract data—bcp, dump/load, custom scripts, replication?
A: Mostly bcp export/import for large tables, with some replication configured for near real-time syncing during migration.

Q: Are there large tables with billions of rows or history tables that might require partitioned or incremental export?
A: Yes, the transaction history table has over 2 billion rows and is partitioned by year. We plan incremental exports by date ranges.
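
Each increment can be staged into its own table and then exported with bcp; a sketch with placeholder names, where the real ranges follow the yearly partitions:

    -- Requires the 'select into/bulkcopy/pllsort' database option to be enabled
    select *
    into   export_txn_2023
    from   dbo.transaction_history
    where  txn_date >= 'Jan 1 2023'
      and  txn_date <  'Jan 1 2024'
    go
    -- The staged table is then exported with bcp ... out from the command line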

Q: Do we have to deal with IDENTITY columns or timestamp fields—how are those being migrated?
A: IDENTITY columns are preserved by enabling set identity_insert ON during migration. Timestamp fields are migrated as-is but tested for compatibility.
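
The identity handling comes down to the standard identity_insert toggle; a sketch with a placeholder table:

    -- Allow explicit values into the IDENTITY column while loading
    set identity_insert dbo.orders on
    go

    insert into dbo.orders (order_id, customer_id, order_date)
    values (1000001, 42, getdate())
    go

    set identity_insert dbo.orders off
    go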


🧪 Testing and Compatibility

Q: Have we identified any Sybase-specific syntax or functions that don't translate directly (e.g., isnull, patindex, convert)?
A: Yes, functions like patindex and str_replace need rewriting or replacement in the target DB.
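
For example, if the target is Microsoft SQL Server (the target platform is not named in this section, so treat the right-hand side as an assumption), the mappings look like this:

    -- Sybase ASE                                 -- SQL Server equivalent
    select str_replace(name, 'Ltd', 'Limited')    -- replace(name, 'Ltd', 'Limited')
    from   dbo.customers
    go

    -- isnull, patindex and convert exist on both sides, but convert's
    -- date style codes and implicit conversions should still be retested
    select isnull(phone, 'n/a'),
           patindex('%Ltd%', name),
           convert(varchar(10), created_date, 101)  -- style 101 = mm/dd/yyyy
    from   dbo.customers
    go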

Q: Are there test suites or baseline query outputs available to validate post-migration results?
A: Yes, a suite of stored procedures with test data and expected results is available for regression testing.
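
One lightweight way to use those baselines: load the procedure's current output and the stored expected output into two tables and diff them. The table names below are hypothetical; the real suite populates them from the stored procedures under test.

    -- Baseline rows that the current run failed to reproduce
    select b.*
    from   dbo.baseline_customer_orders b
    where  not exists (select 1
                       from   dbo.actual_customer_orders a
                       where  a.customer_id = b.customer_id
                         and  a.order_total = b.order_total)
    go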

Q: Have we done a dry run or proof-of-concept migration to understand Sybase-specific gaps?
A: A POC was done on a subset of tables and stored procedures, revealing some dynamic SQL incompatibilities and datatype mapping issues.


🛠️ Sybase Tools & Utilities

Q: Are we using Sybase Central, DBISQL, ASE Cockpit, or custom monitoring tools?
A: DBAs mainly use ASE Cockpit for monitoring, with Sybase Central for schema management. Custom scripts monitor disk space and backup status.

Q: Do we have any auditing or DDL capture tools running on Sybase?
A: We use DDL triggers to capture schema changes and log them for compliance.

Q: Is there a centralized logging or alerting system connected to Sybase (e.g., via email, SNMP traps)?
A: Alerts for critical errors are sent via email, and logs are forwarded to Splunk for centralized analysis.


🧠 Knowledge Transfer & Documentation

Q: Is there documentation available for Sybase procedures, database schemas, or dependencies?
A: Basic schema docs and some stored procedure specs exist but need updating.

Q: Are there any Sybase veterans or business SMEs I can reach out to for tribal knowledge?
A: Yes, DBAs John and Maria have been managing Sybase for 10+ years and are great resources.

Q: Is there an ERD or any visual representation of the Sybase schema?
A: An ERD exists for core modules but not for the entire database. It’s maintained in Sparx Enterprise Architect.


Bonus Tip:

Q: Which Sybase objects are the most critical or fragile that we should treat with extra care?
A: The usp_ProcessOrder stored procedure is critical and complex, affecting order fulfillment. Also, the transaction_history partitioned table requires careful handling due to size and dependencies.
