I don't know why people do that. Hint: if you are running the code, the code is already in the shared pool - the package you pinned was successfully loaded. You say "the SGA was huge" and "I made the SGA smaller", but you don't tell me about the shared pool - which is the only relevant thing here.
Since there are many ways to size and shape the SGA, there cannot be a "best way" or a way that is universally better than any other way. Will flushing the shared pool fix our problem? Joe, May 27, - pm UTC. So whenever the "update" statement is executed, it uses the cached plan instead of determining that a better plan exists.
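For readers following along, flushing the shared pool is the sledgehammer answer to a stale plan. A narrower option - sketched below with a hypothetical table name, since the thread doesn't show the real ones - is to re-gather statistics with invalidation enabled, which forces only the dependent cursors to re-parse:

```sql
-- Sledgehammer: invalidates EVERY cached cursor in the instance.
ALTER SYSTEM FLUSH SHARED_POOL;

-- Narrower: re-gather stats on the one table (name 'T' is hypothetical);
-- no_invalidate => FALSE invalidates only the cursors that depend on T,
-- so they hard parse on next execution and can pick up a better plan.
BEGIN
   DBMS_STATS.GATHER_TABLE_STATS(
      ownname       => USER,
      tabname       => 'T',
      no_invalidate => FALSE);
END;
/
```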
Would flushing the cache fix this problem by forcing the CBO to regenerate a new plan? Is there a better way to fix this? Thanks, Joe. It cannot be what your DBA says, not possible. If the "update" statement is using every column from the table in the "where" clause, why doesn't the CBO use the unique index? I just found out the same problem is happening in our UA environment.
When I drop the wrong index, the plan uses the unique index. As soon as I re-create the "wrong" index, it starts using that index instead of the unique one. Thanks again, Joe. May 28, - am UTC. Can you please let me know why reducing the shared pool has helped here? When you say that the objects are already there in the shared pool..
It's 9. Thanks a lot for your answers - I hardly find such discussions anywhere else. You rock. Rgds Jatin. You have missed my post - I resized the shared pool to MB.
Secondly, what is the generic advice in warehouse environments for this error? I am running out of ideas..
May 28, - pm UTC. It is highly unusual for a true warehouse to have shared pool issues due to parsing. Tom, I am not sure how I can give you an example - whenever I try to create a smaller sample table, the CBO uses the correct index. Looking at the plan, the cost and cardinality are the same for both indexes, but the non-unique index takes over 30 seconds to return.
Is there some other statistic that I could show you? I am not sure how I can give you an example. So, it used an index that it could use entirely. Tom, why does it still use the wrong index when I change the bind variables to real values? Below are two plans - the first is with the wrong index, and the second shows the unique index forced by using an INDEX hint. I started reading this site daily and it has been an incredible learning tool for me. I don't know what "example" you are working with - but I see estimated row counts of "1", so as far as the optimizer is concerned - with your example - either one is the same; they both return ONE row.
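As an aside, the experiment Joe describes can be sketched like this (table, column, and index names are invented for illustration): force the unique index with a hint, then compare the plan actually used against the unhinted run.

```sql
-- Force the unique index with a hint (all names here are hypothetical):
UPDATE /*+ INDEX(t t_uk) */ t
   SET status = :new_status
 WHERE id     = :id
   AND region = :region;

-- Show the plan actually used by the last statement in this session:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
```

Comparing the real runtime plans (rather than EXPLAIN PLAN estimates) is what reveals whether the non-unique index is genuinely doing more work.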
I am hard pressed to comment. I would like to know why that occurs. Regards, New DBA. August 07, - pm UTC. I don't know what I'm looking at here. What are a, b, and c??? Kind regards, Peter. September 04, - pm UTC. No logins were allowed due to ORA. I do not consider this a bug; however, it could be a "nice to have" feature to free the shared pool after an hour or so.
Dear Tom, good day to you. This may sound like a dumb question, but I just wanted to clear my doubt. Thanks a lot for your help and time.
Regards, VS. September 16, - pm UTC. I have been working on a project for about a year; the project is pretty old - about years - and started with Oracle 7 or before and is now on 10g. END; please note val1, val2 are exact values, not bind variables. And a job has been in place to flush the shared pool every hour. After talking to the person who has been managing this database for a long time, it turns out that without flushing the pool, it causes lots of performance issues. I assume the response to my above statement would be "bad architecture, we should change it".
But changing this is a huge effort, and probably is not going to happen. Earlier the number of transactions was smaller, so performance was never an issue, but nowadays there are more transactions; though the hardware configuration has been upgraded a lot, the performance issues remain. And it looks like this is one of the many reasons for the performance problems. Though I don't have a way to measure how the performance would be if the architecture had been done the proper way, so it's hard to convince management.
My question: 1 Keeping the system architecture as it is, is there anything we can do from the database configuration side to improve performance? Tom, I am awaiting your response to the above followup questions.
Appreciate your reply. May 25, - pm UTC. I don't see all of the questions - sometimes when I travel, I just skip them because I don't have enough time. BEGIN proc1 val1,val2,.. Their lack of use of bind variables causes a huge performance issue all by itself - much much much larger than any shared pool fragmentation.
They should shrink the size of the shared pool to cause us to flush it internally ourselves more often. As it is - I'll guess "we made it really really big - but when it fills up, it takes a long time to empty by itself, so we flush it every hour, way before it fills up, to make that take less time".
You know what would accomplish that as well? Making the pool smaller, so we flush a smaller thing more often - but only while the developers fix the gaping bug in their incredibly bad application. It doesn't work that way in real life! You want to observe a change in response time - you'll have to make a change in your application.
That way you can sit down and start to look at the code. I firmly believe it will not be that big of a change to use binds - you should investigate that a bit.
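To make the bind-variable point concrete, here is a minimal sketch - the procedure name proc1 comes from the thread, but the variable names and values are invented - of the literal-style call the application makes today versus the bind-style call being recommended:

```sql
DECLARE
   l_val1 NUMBER       := 123;
   l_val2 VARCHAR2(10) := 'ABC';
BEGIN
   -- Literal style: every distinct pair of values builds a brand new
   -- SQL text, so each call is a hard parse and a new shared pool entry.
   EXECUTE IMMEDIATE
      'BEGIN proc1(' || l_val1 || ', ''' || l_val2 || '''); END;';

   -- Bind style: one SQL text, one shared cursor, reused on every call.
   EXECUTE IMMEDIATE 'BEGIN proc1(:1, :2); END;' USING l_val1, l_val2;
END;
/
```

Often the change really is this small: the call sites keep the same shape, only the concatenated values move into the USING clause.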
Tom, thank you very much for your reply. Some of the things you have mentioned had already come to my mind before, but hearing the same from you always adds some more value. Another thing which I heard from the person who has been managing this database for a long time is "not flushing the shared pool causes some unexpected behaviour, like out of memory, sessions getting dropped, etc., and the system didn't work", though I have not seen it or heard of it anywhere before.
Just wondering if you have come across this kind of problem anywhere? And can you tell me the reason for it and whether it has any impact or not?
We did things like add the large pool and such way back in the mid 's to try to alleviate the issue of "people flooding the shared pool with literal sql". Having 24 copies of something is absolutely normal in many cases. Thank you very much Tom, for your valuable input. The number of those multiple entries is comparatively large during the peak hours - do you think it is something which needs attention? The difference is the size of the bind variable; I think it is caused by an implicit conversion based on the size of the input - I am not sure where this implicit conversion is done, in the application software or in the database.
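The bind-length behaviour described here can be observed directly in the dictionary views. A sketch (the column set of v$sql_shared_cursor varies by version, so treat the second query as illustrative):

```sql
-- Statements that have accumulated many child cursors:
SELECT sql_id, COUNT(*) AS child_cursors
  FROM v$sql
 GROUP BY sql_id
HAVING COUNT(*) > 10
 ORDER BY child_cursors DESC;

-- Why a given statement's children could not be shared; look for
-- the bind-mismatch related columns flagged 'Y':
SELECT *
  FROM v$sql_shared_cursor
 WHERE sql_id = '&sql_id';
```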
Now most of those procedures have on an average input parameters. Do we need to pay any attention to this? If yes, is there anything from the database side we can do to overcome this situation? Hello there Tom, I found your asktom very useful, with a lot of "hidden" information inside, so thank you for the good work.
On the topic, reading the reviews until now and the replies, you strongly recommend not flushing the pool and using the bind variables. The system is designed in a way to use bind variables, so this is one side covered.
The flaw is that it is constantly creating "temp" tables in non-temporary tablespaces and doing operations with those tables. After the operations are done, the system drops the tables. What do you suggest in this situation, other than changing the code, which belongs to an external developer and which management won't approve changing? Thank you in advance for your response, and don't mind deleting this review if you find it not relevant.
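For completeness, the usual alternative to runtime CREATE/DROP of scratch tables is a global temporary table, created once at install time; a sketch with invented table and column names:

```sql
-- Created once, not per run; each session sees only its own rows,
-- and the rows vanish at commit (or at session end with PRESERVE ROWS).
CREATE GLOBAL TEMPORARY TABLE work_rows (
   id      NUMBER,
   payload VARCHAR2(100)
) ON COMMIT DELETE ROWS;

-- Runtime code just inserts and queries - no DDL, so no dictionary
-- churn and no invalidation of dependent cursors:
INSERT INTO work_rows (id, payload) VALUES (:id, :payload);
```

Since the DDL moves out of the runtime path entirely, this is one of the few fixes that can sometimes be applied without touching much of the application logic.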
Best regards V. Of course not, the table it used no longer exists. This is a big mistake, one that can only be fixed by ripping the application apart and doing it correctly. Tom, One of our developers reported this issue.
I am not sure if this could be a possibility. We have a batch process where a set of tables is populated to a high volume. Here is the description of the problem and workaround, verbatim. This is a useful feature to clear existing data and re-load fresh data. Every dirty block in the buffer cache is written to the data files. That is, it synchronizes the data blocks in the buffer cache with the datafiles on disk.
It's the DBWR that writes all modified database blocks back to the datafiles. As a permanent fix, we have scheduled a cron job which fires at midnight to flush the shared pool.