
Author Archives: venushuynh

Best Practices for SAP HANA Data Loads – SAP HANA

I’ve been involved in a few SAP HANA projects where performance was initially disappointing, and I have been asked to come in and demonstrate HANA speed to the customer. As I’ve…

Source: Best Practices for SAP HANA Data Loads – SAP HANA


System-Generated Schemas in SAP HANA



_SYS_REPO is a system user. Its schema is a system schema. You don’t touch any of those.

When you create new application schemas in your security context, e.g. under your user, it is necessary to grant _SYS_REPO the SELECT privilege WITH GRANT OPTION on your schema if you want to build and activate models that read from it.

Think of _SYS_REPO as “the activation guy”. It takes your models and creates the necessary runtime objects from them (the _SYS_BIC schema, several BIMC_* tables and other places hold this runtime information). Now, after having activated all the models that access data in your schemas, _SYS_REPO wants to give you (and probably other users) read access to the activated models. Part of that is allowing read access to your data.

That’s what _SYS_REPO needs the privilege on your schema for: allowing others access to your data through the activated models.

Since not all schemas in the database will be used in models, this privilege is not assigned to _SYS_REPO automatically.
As part of your security strategy, you need to actively assign _SYS_REPO the permission on every schema you want to build models on.
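In SQL, the grant described above looks like this (MY_SCHEMA is a placeholder for your application schema):

```sql
-- Let _SYS_REPO read the schema and pass that read access on
-- to consumers of the activated models:
GRANT SELECT ON SCHEMA "MY_SCHEMA" TO _SYS_REPO WITH GRANT OPTION;
```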

SAP HANA


Data Modeling and Operation in SAP HANA



An Index or Disaster, You Choose (It’s The End Of The World As We Know It)

Richard Foote's Oracle Blog

This came up recently with a customer, and I saw a related question on OTN that made me realize I haven’t previously discussed the SKIP_UNUSABLE_INDEXES parameter.

Let me start by making a very simple point. All it potentially takes to make your entire database effectively inoperable, or indeed to maybe bring down a RAC instance, is for one teeny weeny index to become unusable. That’s it.

If the unusable/missing index is used often enough on a table that is big enough, the resultant explosion of load on the database generated by 10,000s of (say) massive Full Table Scans can be catastrophic for all users of the database (or indeed for all users of the disk subsystem or database servers, as in the case of my customer).

Sometimes the lesser evil would be to simply ensure the statement(s) that were reliant on the now unusable index are prevented from executing and causing general havoc…
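A quick way to spot the problem, and to make the affected statements fail fast rather than silently full-scanning, is via the data dictionary and the session-level parameter (both are standard Oracle; setting the parameter to FALSE makes statements that need an unusable index raise an error instead):

```sql
-- List indexes the optimizer can no longer use:
SELECT owner, index_name, status
FROM   dba_indexes
WHERE  status = 'UNUSABLE';

-- Raise an error instead of falling back to full table scans:
ALTER SESSION SET skip_unusable_indexes = FALSE;
```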

View original post 419 more words

Avoiding ORA-04068: existing state of packages has been discarded

Mark Hoxey

If you’re trying to run your database as close to 24×7 as possible, with no downtime for updates, then one thing you will need to avoid is the following error:

ORA-04068: existing state of packages has been discarded

Package state comes about when a package has variables or cursors defined at the package level, as opposed to local variables within procedures and functions. According to the PL/SQL documentation for Oracle 11.2:

The values of the variables, constants, and cursors that a package declares (in either its specification or body) comprise its package state. If a PL/SQL package declares at least one variable, constant, or cursor, then the package is stateful; otherwise, it is stateless.

Whenever a package is recompiled any package state held by a current session is invalidated and ORA-04068 occurs the next time the session tries to use the package again. Subsequent calls to the package by…

View original post 1,108 more words
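A minimal sketch of a stateful package (names are hypothetical): the package-level variable is what constitutes the state, so recompiling this package discards it for any session that has already called it, triggering ORA-04068 on the next call.

```sql
CREATE OR REPLACE PACKAGE pkg_demo AS
  g_call_count NUMBER := 0;  -- package-level variable => package state
  PROCEDURE do_work;
END pkg_demo;
/
```

Moving such variables into local scope inside the procedures makes the package stateless and avoids the error on recompilation.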

; and /


Restore and Recovery Tactics

Control Files: what if we lose all control files?
-> Recreate the control files from scratch.
The good practice for this case: every time we change the DB structure -> back up the control file to trace.

alter database backup controlfile to trace as 'c:\oracle\product\control_bk.sql';
--In this file, we can find the SQL to recreate the control file.
In this case, start the database in NOMOUNT and run that script,
-> then alter database open;

Redo Logs: recovery
Case: steps to follow when the system corrupts.
1. Restore from the latest backup
2. Apply the archive logs (use RMAN to find them)
3. Apply the most current redo log, which hasn’t yet been written to the archive log

select a.group#, member, a.status, bytes
from v$log a, v$logfile b
where a.group# = b.group#
order by 1;

How to recover when things go wrong with the redo/archive logs

--We can enlarge the redo logfiles to make log switches (and thus checkpoints) less often
2 options:
2.1. Drop and recreate (for an INACTIVE group)
alter database drop logfile group 1;
alter database add logfile group 1 ('C:\oracle\….\redo1.log') size 100M;

For the CURRENT group, switch first:
alter system switch logfile;
alter database drop logfile group 1;
alter database add logfile group 1 ('C:\oracle\….\redo1.log') size 100M;

2.2. Recreate the logfiles with a larger size (redo logfiles cannot be resized in place)
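The drop-and-recreate sequence above can be combined into a resize workflow. A minimal sketch, assuming at least two log groups exist (the path, size, and group numbers are placeholders):

```sql
-- Add a new, larger redo log group:
alter database add logfile group 4 ('C:\oracle\oradata\redo4a.log') size 200M;

-- Move redo generation off the old group and force a checkpoint
-- so the old group becomes INACTIVE and can be dropped:
alter system switch logfile;
alter system checkpoint;

-- Drop the old, smaller group:
alter database drop logfile group 1;
```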

--For the case: Undo tablespace crash
To simulate the situation, we can shut down the DB, then rename the Undo tablespace datafile.
-> Start the DB -> ORA-01157
-> Check the alert log
show parameter undo;

> alter system set undo_management = manual scope = spfile;
> alter database datafile '' offline drop;

> shutdown immediate;
> startup;

> create undo tablespace undo datafile '..\UNDO01.DBF' size 10M;
> alter system set undo_management = auto scope = spfile;
> shutdown immediate;
> startup;
select tablespace_name, contents
from dba_tablespaces;

> show parameter undo;
--if the old undo tablespace is still there, then
drop tablespace ….;
alter system set undo_tablespace = 'UNDO';

Recovering non-critical datafiles

Recovering critical datafiles

Tuning: Monitoring Recovery

Startup:    NOMOUNT                                        ==>  Mount            ==> Open
(spfileSID.ora or spfile.ora or initSID.ora)   (Control Files)    (Datafiles, Redo logs, Everything in sync)