SQL Statements

The EnterpriseOne SQL Statements category will include all posts and pages pertaining to SQL statements that can be used in the administration and configuration of EnterpriseOne.

Better Performance On Work with Payroll Transaction Ledger (P050999) With An Index On F0618

We have been getting complaints about the Work with Payroll Transaction Ledger (P050999) grid loading performance. So, I decided to take a look. What I found was that upon entering the application it does an automatic find on the grid using the following SQL statement with the current G/L date in the WHERE clause.

SELECT
YTDGL,YTMCU,YTSBLT,YTPAYM,YTPB,YTSBL,YTUAMT,YTPCUN,YTPRTR,YTDWK,YTSHRT,YTAN8,YTSHD,YTPPRT,YTPALF,YTBHRT,YTOBJ,YTGPA,YTSUB,YTCMTH,YTAUBP,
YTPHRW,YTPDBA,YTCO,YTAID,YTCRCD,YTCRDC
FROM PRODDTA.F0618
WHERE (YTDGL <= <current G/L date>)  -- the rest of the statement was truncated in the original post
ORDER BY YTDWK                       -- sort field reconstructed from the discussion below

As you can see, this SQL statement is returning every record in the F0618 that has a G/L Date less than or equal to the system's current G/L Date. I don't know about your installation but the one that I was working on had 17+ million records. Why the heck would it need to return all that?
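
If you want to see just how bad that automatic find is on your own system, a quick count with the same filter shows how many rows it is dragging back (the G/L date here is just an example value, swap in your own):

-- How many F0618 rows fall at or before a given G/L date (example date)
SELECT COUNT(*) FROM PRODDTA.F0618 WHERE YTDGL <= '2015-06-30'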

That screen was taking 8+ minutes to present information to the user.

With the help of my coworker Geordy and Technet, I came up with an index on the F0618 that didn't make the SQL statement finish any quicker, but it does present the first page of data to the user extremely fast (LESS THAN 3 SECONDS!).

CREATE INDEX [F0618_CUSTOM]
ON [JDE_PRODUCTION].[PRODDTA].[F0618] ([YTDWK] ASC)
INCLUDE ([YTDGL],[YTAN8], [YTPALF], [YTPRTR], [YTCO], [YTMCU], [YTOBJ], [YTSUB], [YTSBL], [YTSBLT], [YTPHRW], [YTPCUN], [YTPPRT], [YTBHRT],
[YTSHRT], [YTSHD], [YTPAYM], [YTGPA], [YTPDBA], [YTPB], [YTUAMT], [YTAID], [YTCMTH], [YTCRCD], [YTCRDC], [YTAUBP])

You can see from the index that all I did was create an index over the field the SQL was sorting on (YTDWK, or Date of Work) and include the rest of the fields from the query as INCLUDE columns, making it a covering index.
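
If you want to confirm SQL Server is actually picking up the new index, one quick sanity check (just a sketch using the index usage DMV, run in the JDE_PRODUCTION database) is:

-- Seek/scan counts per index on F0618 since the last SQL Server restart
SELECT i.name AS index_name, s.user_seeks, s.user_scans, s.last_user_seek
FROM sys.indexes i
LEFT JOIN sys.dm_db_index_usage_stats s
    ON s.object_id = i.object_id
    AND s.index_id = i.index_id
    AND s.database_id = DB_ID()
WHERE i.object_id = OBJECT_ID('PRODDTA.F0618')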

There must be other little tricks with indexes in EnterpriseOne. Have you found any?

 

Restore SQL Server Database On A Schedule

Keep Calm and Automate

One of our companies has a training environment in addition to the standard PS, DV, PY & PD environments. They have been working very hard refining their Procure-To-Pay (P2P) process. Now that they know how they would like to handle this process, they need to train the users. That’s where the P2P Training (PTR) environment comes in.

We set up the environment and have had everything running smoothly for the last few months. However, last week we received a request to back up the data in that environment and then restore that backup copy every morning for the next 2 weeks.
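
For what it's worth, taking the backup that gets restored each morning is a one-liner; something like this, assuming the backup file lives on the H: drive like the restore script below expects:

-- One-time backup of the training data (the file name matches what the restore script expects)
BACKUP DATABASE TestDB TO DISK = N'H:\TestDB.bak' WITH INIT, STATS = 10
GO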

It’s a really simple process, but it was kind of a pain having to do the restore manually.

Well, if you’ve been reading this blog for very long, you know that I can’t stand doing things more than once. A trained monkey, I am not. If it can be automated, I do it and it usually makes sense to do so. This was definitely one of those times.

What I ended up doing was making 2 files:

  1. A .bat file to call the SQL file and log the start and end times
    @echo off
    rem log the start time, run the restore script, then log the finish time
    echo. >> restore.log
    echo %date% %time% >> restore.log
    sqlcmd -E -d JDE_PTR -i h:\restore.sql >> restore.log
    echo %date% %time% >> restore.log
    
    
  2. The SQL file (restore.sql)
    USE master
    GO
    
    -- Kick everyone out of the database and roll back any open transactions
    ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    GO
    
    -- Overwrite the database from the backup file
    RESTORE DATABASE TestDB FROM DISK=N'H:\TestDB.bak' WITH  FILE = 1, NOUNLOAD,  REPLACE,  STATS = 10
    GO
    
    -- Open the database back up to everyone
    ALTER DATABASE TestDB SET MULTI_USER
    GO
    

Then, I just added a scheduled task that fired off the bat file every morning.

Note: When creating the scheduled task, you will need to fill in a value for the “Start in” option. I’m not sure why but it only works if you fill that in. I was using Windows Server 2008 R2 so your mileage may vary.

Scheduled Task Start In Option
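
As an aside, if you would rather keep everything inside SQL Server, the same restore script could be scheduled as a SQL Server Agent job instead of a Windows scheduled task. A rough sketch (the job name and the 5:00 AM start time are just my assumptions):

USE msdb
GO

-- Create the job and a single T-SQL step that runs the restore script
EXEC dbo.sp_add_job @job_name = N'Restore PTR Training Data'
EXEC dbo.sp_add_jobstep @job_name = N'Restore PTR Training Data',
    @step_name = N'Run restore script',
    @subsystem = N'TSQL',
    @database_name = N'master',
    @command = N'ALTER DATABASE TestDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
RESTORE DATABASE TestDB FROM DISK = N''H:\TestDB.bak'' WITH FILE = 1, REPLACE, STATS = 10;
ALTER DATABASE TestDB SET MULTI_USER;'
-- Run it every day at 5:00 AM
EXEC dbo.sp_add_jobschedule @job_name = N'Restore PTR Training Data',
    @name = N'Daily 5 AM',
    @freq_type = 4, @freq_interval = 1,
    @active_start_time = 050000
EXEC dbo.sp_add_jobserver @job_name = N'Restore PTR Training Data', @server_name = N'(local)'
GO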

How To Move SQL Server Data and Transaction Log Files

I’ve been migrating data from an IBM AS/400 to a SQL Server a lot lately. Within the next year or so we should have completed migrating 3 more instances. Then, we will have all 7 instances on SQL Server.

In preparing for the final data migration of one of the instances, I had to move the data and transaction log files. I found a great article for this on MSSQLTips.

Below are the basic steps.

  1. Use the following commands to get the names and location of the data and transaction log files
    USE AdventureWorks
    GO
    
    sp_helpfile
    GO
    
  2. Set the database to single user mode and detach it
    Use MASTER
    GO
    
    -- Set database to single user mode
    ALTER DATABASE adventureWorks
    SET SINGLE_USER
    GO
    
    -- Detach the database
    sp_detach_db 'AdventureWorks'
    GO
    
  3. Move the files to their new location
  4. Re-attach the database with the files at their new location
    USE master
    GO
    
    -- Now Attach the database
    sp_attach_DB 'AdventureWorks',
    'D:\Program Files\Microsoft SQL Server\MSSQL\Data\AdventureWorks_Data.mdf',
    'E:\Move LogFile here through T-SQL\AdventureWorks_Log.ldf'
    GO
    
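On more recent SQL Server versions you can also skip the detach/attach and move the files with ALTER DATABASE instead. A rough sketch, assuming the logical file names are the usual AdventureWorks_Data and AdventureWorks_Log (use the names from the sp_helpfile output in step 1):

-- Point SQL Server at the new file locations (takes effect when the database comes back online)
ALTER DATABASE AdventureWorks MODIFY FILE (NAME = AdventureWorks_Data, FILENAME = 'D:\Program Files\Microsoft SQL Server\MSSQL\Data\AdventureWorks_Data.mdf')
ALTER DATABASE AdventureWorks MODIFY FILE (NAME = AdventureWorks_Log, FILENAME = 'E:\Move LogFile here through T-SQL\AdventureWorks_Log.ldf')
GO

-- Take the database offline, move the physical files at the OS level, then bring it back online
ALTER DATABASE AdventureWorks SET OFFLINE WITH ROLLBACK IMMEDIATE
GO
-- (copy/move the .mdf and .ldf files here)
ALTER DATABASE AdventureWorks SET ONLINE
GO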

EnterpriseOne UBE Runtime Performance Audit

Yesterday, I saw a post by Shannon Moir called “Nice UBE / Batch Performance Summary”. It was great! It provided some of the information needed to really evaluate the runtimes of your UBEs.

Shannon is a whiz at SQL on Oracle databases. Unfortunately for me, we don’t use Oracle databases. So, I converted Shannon’s SQL so that I could use it on SQL Server.

SELECT JCPID as UBE, 
	JCVERS as [Version], 
	ltrim(rtrim(simd)) + ': ' + ltrim(rtrim(vrjd)) as UBE_Name, 
	count(1) as [Count],
	Avg(datediff(s,jcstdtim,jcetdtim)) AS Avg_Time,
	min(datediff(s,jcstdtim,jcetdtim)) AS Min_Time, 
	max(datediff(s,jcstdtim,jcetdtim)) AS Max_Time
from svm910.f986114,ol910.f9860, [JDE_PD910].pd910.f983051
where ltrim(rtrim(jcpid)) = ltrim(rtrim(siobnm)) 
	and ltrim(rtrim(jcvers)) = ltrim(rtrim(vrvers)) 
	and ltrim(rtrim(jcpid)) = ltrim(rtrim(vrpid))
	and JCETDTIM > '01-01-2015'
	and JCETDTIM < '<end date>' -- upper bound of the date range; the value was truncated in the original post
group by JCPID, JCVERS, ltrim(rtrim(simd)) + ': ' + ltrim(rtrim(vrjd)) -- required for the aggregates above

Thanks again, Shannon!

Underscore: The Other SQL Wildcard Used For Single Characters

Well, somewhere in the many years that I have been working with and writing SQL statements, I missed something very useful: the underscore (_), which can be used as a wildcard for a single character.

Yep, that’s right. The underscore can be used in a WHERE ... LIKE clause as a wildcard for a single character.

It works like so:

 SELECT * FROM OL910.F9860 WHERE SIOBNM LIKE '_55%'

Ranges of characters can also be specified in a WHERE ... LIKE clause.
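
For example, to pull F9860 records whose object names start with R followed by a digit between 0 and 4 (a made-up pattern, just to show the syntax):

 SELECT * FROM OL910.F9860 WHERE SIOBNM LIKE 'R[0-4]%'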

The full set of SQL Server LIKE wildcards:

% : Any string of zero or more characters. Example: WHERE title LIKE '%computer%' finds all book titles with the word 'computer' anywhere in the title.

_ (underscore) : Any single character. Example: WHERE au_fname LIKE '_ean' finds all four-letter first names that end with ean (Dean, Sean, and so on).

[ ] : Any single character within the specified range ([a-f]) or set ([abcdef]). Example: WHERE au_lname LIKE '[C-P]arsen' finds author last names ending with arsen and starting with any single character between C and P (Carsen, Larsen, Karsen, and so on). In range searches, the characters included in the range may vary depending on the sorting rules of the collation.

[^] : Any single character not within the specified range ([^a-f]) or set ([^abcdef]). Example: WHERE au_lname LIKE 'de[^l]%' finds all author last names starting with de where the following letter is not l.

source: msdn

UPDATE: 5-28-2015

What if the string we are searching for in the WHERE ... LIKE clause contains a literal underscore? Wrap the underscore in square brackets and it is treated as a literal character rather than a wildcard:

 SELECT * FROM TABLE WHERE COLUMN LIKE 'some[_]string%'
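
Another option for the same situation is the ESCAPE clause, which lets you pick your own escape character and treat the underscore as a literal:

 SELECT * FROM TABLE WHERE COLUMN LIKE 'some!_string%' ESCAPE '!'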