Monday, July 30, 2012

Noetix: Working With Item Cost Summaries

If, like us, you're a user of Oracle Process Manufacturing then you'll have written a lot of reports that rely on Item Costings (for both products and ingredients). One of the things that makes the current structure of the Noetix view difficult to work with is that it provides a detailed breakdown of the costs - useful if you're reporting on the costs themselves, but not if you're reporting on, say, the value of inventory, where you're only interested in the *total* cost.

Of course it's fairly easy to get there - you just total up all the costs - but doing so introduces unnecessary complexity into your SQL, often leading to sub-selects in joins like the following;

select 
  gic.ITEM$Item,
  sum(decode(gic.Component_Group, 'LAB', gic.Item_Cost)) as TOTAL_LABOUR_COST,
  sum(decode(gic.Component_Group, 'OVE', gic.Item_Cost)) as TOTAL_OVERHEAD_COST,
  sum(decode(gic.Component_Group, 'MAT', gic.Item_Cost)) as TOTAL_MATERIAL_COST
from gmfg0_item_costs gic
where gic.PERIOD_CODE = :PERIODCODE
group by gic.ITEM$Item

As you can see we have three component groups - LABour, OVErhead, and MATerial - and while this is a fairly understandable piece of SQL, it gets repeated over and over again in various reports. When, as happened recently, we change the way costing works (by moving costs from one Organisation to another) it becomes very complex to apply the change to all the affected reports.

Because of this we've introduced a summary view, GMFG0_Item_Cost_Summary, which sits on top of GMFG0_Item_Costs and looks like this;

Name                Type         Nullable Default Comments
------------------- ------------ -------- ------- --------
ORGANIZATION_CODE   VARCHAR2(40) Y                        
PERIOD_CODE         VARCHAR2(10)                          
Z$INVG0_ITEMS       ROWID        Y                        
ITEM$ITEM           VARCHAR2(40) Y                        
TOTAL_LABOUR_COST   NUMBER       Y                        
TOTAL_OVERHEAD_COST NUMBER       Y                        
TOTAL_MATERIAL_COST NUMBER       Y                        
TOTAL_ITEM_COST     NUMBER       Y  


The SQL to generate it is;

create or replace view gmfg0_item_cost_summary as
select gic.Organization_Code,
       gic.period_code,
       gic.Z$INVG0_Items,
       gic.ITEM$Item,
       sum(decode(gic.Component_Group, 'LAB', gic.Item_Cost)) as TOTAL_LABOUR_COST,
       sum(decode(gic.Component_Group, 'OVE', gic.Item_Cost)) as TOTAL_OVERHEAD_COST,
       sum(decode(gic.Component_Group, 'MAT', gic.Item_Cost)) as TOTAL_MATERIAL_COST,
       sum(gic.Item_Cost) as TOTAL_ITEM_COST
  from gmfg0_item_costs gic
 group by gic.Organization_Code,
          gic.period_code,
          gic.Z$INVG0_Items,
          gic.ITEM$Item;
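
With the view in place, the query at the top of this post collapses to a simple select. One thing to note: the view keeps the costs split by Organization_Code, so if you were previously summing across organisations you'll want to filter on that column too (:ORGANIZATIONCODE below is just an illustrative bind variable, like :PERIODCODE above);

select ics.ITEM$Item,
       ics.TOTAL_LABOUR_COST,
       ics.TOTAL_OVERHEAD_COST,
       ics.TOTAL_MATERIAL_COST,
       ics.TOTAL_ITEM_COST
  from gmfg0_item_cost_summary ics
 where ics.PERIOD_CODE = :PERIODCODE
   and ics.ORGANIZATION_CODE = :ORGANIZATIONCODE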

To have this view rebuilt every time we run a Noetix regenerate we have created an XU6 script that automatically builds the view after each regenerate - it's available here (via Google Docs).


Friday, July 27, 2012

Generating a "My Reports" In SQL Server Reporting Services (SharePoint Integrated Mode)

The idea behind this report is that most, if not all, of our users need to run the same reports over and over again every month. So rather than coming up with some fantastic way of categorising the reports, why not write a report that looks at the logs, sees which reports the user has run and how often, and presents the user with that list to pick from?

Here's a screen capture of the final report;

My Reports - Sample Output

As you can see we took the opportunity to provide a way for users to submit suggestions to correct the (laughably inaccurate) descriptions we hold for the reports into something a lot more meaningful about what each report does for them.

Now the next thing to say is that Microsoft does not support direct access to the tables in either SharePoint or Reporting Services, EXCEPT for SSRS access to the ExecutionLog* views. It also helps if, like us, you have consolidated your SQL Servers so that the same server hosts both SharePoint's WSS_Content_ database AND the ReportServer database for SSRS.

Here is the SQL;

SELECT A."Report Name",
       AUD.ntext2          "Description",
       AUD.nvarchar12      "Status",
       AUD.tp_Version      "Version",
       A."Execution Count"
  FROM (SELECT C.NAME "Report Name", COUNT(*) "Execution Count"
          FROM ReportServer.dbo.Catalog C
          JOIN ReportServer.dbo.ExecutionLogStorage ELS
            ON C.ItemID = ELS.ReportID
         WHERE 1 = 1
           AND ELS.UserName = @Username
           AND ELS.Format = 'RPL'
           AND ELS.TimeStart > '01-JAN-2012'
           AND C.Type = 2 -- Report
           AND UPPER(C.Path) LIKE UPPER('%Reports LIVE%')
         GROUP BY C.NAME
        HAVING COUNT(*) >= @Min_Execution_Count
        ) A
  JOIN WSS_Content_Reporting.dbo.AllUserData AUD
    ON A."Report Name" = AUD.tp_LeafName
   AND AUD.tp_DirName = 'Reports LIVE'
   AND AUD.nvarchar3 = 'rdl'
   AND AUD.tp_IsCurrentVersion = 'True'
 ORDER BY A."Execution Count" DESC

As you can see this SQL joins the data in the Catalog and ExecutionLogStorage tables in SSRS with the AllUserData table in SharePoint. The execution details are restricted to just reports (ELS.Format = 'RPL' and C.Type = 2) and, as our live reports are in a single Document Library in SharePoint, we have also restricted it to where the report's path (C.Path) contains the words "Reports LIVE" (the name of the directory).

Additionally, the details from SharePoint (AUD.ntext2, AUD.nvarchar12, etc.) will vary depending on your configuration. If you don't have additional details stored in SharePoint (like descriptions) you can just drop that part of the SQL.

Finally we have two parameters, @Username and @Min_Execution_Count. @Username is the user whose report execution history we're interested in, and @Min_Execution_Count restricts the output to reports that have been executed at least that many times.

We order the results by the Execution Count in descending order so the most frequently run report is at the top.
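
If you want to sanity-check the dataset query in Management Studio before wiring it into the report, you can declare the two parameters yourself - a quick sketch, with made-up test values (substitute a real domain account and threshold):

-- Hypothetical test values only; replace with a real user and threshold
DECLARE @Username NVARCHAR(260)
DECLARE @Min_Execution_Count INT
SET @Username = N'DOMAIN\jsmith'
SET @Min_Execution_Count = 5
-- ...then paste the dataset query from above after these declarations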

Now that we've got the dataset set up (I've called it "My reports By Frequency Used.rsd" and the report makes use of it from the "Data Sets" document library), the report which accesses it is available here - please note that in order to use it you need to save it as a normal text file. I have made the following replacements;

  • xxxxx - This is the servername
  • yyyyyy - This is the email address that messages from people clicking "[Suggest Update]" will be sent to

SharePoint 2007: Implementing Simple Tagging

This blog post just covers a very simple "tagging" example that I've been using on our SharePoint 2007 Reporting Services server to allow the sharing of reports between different areas. For example, Finance has a simple report that gets GL Journal Entry lines for a specific Account Code combination; it's useful to many of the teams in Finance, so they all want access to it. Using tags, each team can have its own view of reports that includes the "shared" report.

Let's start by looking at how our current environment is configured. Basically we have a single Document Library (Reports LIVE) which contains all our reports, and we've added various additional fields to the SharePoint library so that when you edit a document it looks like this;


Report Properties (SharePoint 2007)
As you can see we had a slight disagreement on how we should work. I championed the "tag everything, categorise nothing" position and sadly had to compromise on "Categorise on Business Process, and Tag Teams Within The Process" - of course we quickly had to turn the categories into a multi-select but otherwise it seems to work just fine (of course I still think I was right - but doesn't every software engineer?!).


The field I'm going to store tags in is the "Keyword(s)" field. As you can see it's not mandatory (sometimes a Process is so distinct that, once you've set the process category, tagging is just irrelevant) and for the report shown above it's actually blank - we're going to change that.


I've picked an Expense report (from Oracle Internet Expenses) deliberately as I know it is used by two teams: Expenses and Accounts Payable. At the moment all the teams in Finance (General Ledger, Purchasing, Inventory, Audit, etc.) have the report visible in their lists.


The first step is to enter some keywords. I'm using ";" as a separator so I've entered "expenses;payables" (note the lack of a space);


Adding Keywords To The Report
Now that the changes have been saved, go to the drop-down at the top right of the document library and choose "Create View". I'm going to create a view called "Finance: Payables" that will contain any report tagged with "payables";

New View for Finance (Accounts Payable)
Select the columns you'd like to display and then scroll down to the Filter section and choose to show the items when column "Keyword(s)" contains "payables";

Building a View Filter
Now save the new view and you should see the report you've tagged. To properly implement this you then need to create a second view to pick up the reports tagged with "expenses" - you'll then see the same report appearing in both views but, if you continue the example and create views for the other groups, not in theirs.

Of course you're totally reliant on everyone spelling (and typing!) everything correctly - something that's far from guaranteed - but it does give you a way of dynamically categorising your items and sharing them between different groups: something folders won't let you do.

Thursday, July 5, 2012

PL/SQL: Oracle Date/Time Calculations

This is a very simple blog post which is just a list of Oracle Date/Time calculations (i.e. every two hours, every day, 8am every day, etc.). These formulas are intended to help you calculate the "next" run date for a given requirement. (Remember that Oracle date arithmetic works in days, so 7/24 is seven hours.)

Midnight;
TRUNC(SYSDATE)+1

Tomorrow at 7am;
TRUNC(SYSDATE)+1 + 7/24

Every two hours (on the hour);
TRUNC(SYSDATE, 'HH') + 2/24

Every two hours (on the hour) between 8am and 6pm every day;
CASE WHEN TRUNC(SYSDATE, 'HH') + 2/24 <= TRUNC(SYSDATE) + 19/24 THEN TRUNC(SYSDATE, 'HH') + 2/24 ELSE TRUNC(SYSDATE)+1 + 8/24 END
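
A quick way to sanity-check any of these expressions is to evaluate them straight from SQL; the TO_CHAR format below is just for display:

SELECT TO_CHAR(TRUNC(SYSDATE) + 1, 'DD-MON-YYYY HH24:MI')          midnight,
       TO_CHAR(TRUNC(SYSDATE) + 1 + 7/24, 'DD-MON-YYYY HH24:MI')   tomorrow_7am,
       TO_CHAR(TRUNC(SYSDATE, 'HH') + 2/24, 'DD-MON-YYYY HH24:MI') in_two_hours
  FROM DUAL;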

PL/SQL: Using Oracle DBMS_JOB For Scheduled Tasks

To improve reporting performance we like to pre-build some of the data for our reports. To keep this data fresh we have written a simple PL/SQL script that we need to schedule to run daily.

The PL/SQL behind this is simply;

begin
  for v_DateRange in (SELECT TRUNC(SYSDATE) - 7 Start_Date,
                             TRUNC(SYSDATE) End_Date
                        FROM DUAL) loop
    gl_je_sla_update.processdaterange(p_startdate => v_DateRange.Start_Date,
                                      p_enddate   => v_DateRange.End_Date);
  end loop;
end;

This just calls the "ProcessDateRange" procedure in GL_JE_SLA_UPDATE with the date range specified by the query.

One of the key requirements is that the job should run at 7am every day. Here is the script;

begin
  sys.dbms_job.submit(
    job => :job,
    what => 'begin
               for v_DateRange in (SELECT TRUNC(SYSDATE) - 7 Start_Date,
                                          TRUNC(SYSDATE) End_Date
                                     FROM DUAL) loop
                 gl_je_sla_update.processdaterange(
                   p_startdate => v_DateRange.Start_Date,
                   p_enddate   => v_DateRange.End_Date);
               end loop;
             end;',
    next_date => to_date('05-07-2012 14:38:35', 'dd-mm-yyyy hh24:mi:ss'),
    interval => 'TRUNC(SYSDATE)+1+7/24');
  commit;
end;

Submitting this job (with these values) causes it to run immediately, as the "next date" will be in the past for you (not usually a problem, but something to be aware of). You'll notice that in order to use this PL/SQL you need to supply the :job bind variable, which receives the number of the newly created job.
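
If you're running this from a plain script rather than a tool that prompts you for bind variables, a minimal sketch is to declare the variable yourself (the 'null;' payload below is just a placeholder, not our real job):

declare
  v_job binary_integer;  -- receives the job number assigned by dbms_job
begin
  sys.dbms_job.submit(
    job       => v_job,
    what      => 'null;',  -- placeholder payload for illustration
    next_date => trunc(sysdate) + 1 + 7/24,  -- tomorrow at 7am
    interval  => 'TRUNC(SYSDATE)+1+7/24');   -- then 7am every day
  commit;
  dbms_output.put_line('Submitted job ' || v_job);
end;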

A full list of the available options on the DBMS_JOB.Submit procedure is;

Parameters for DBMS_JOB.Submit
As you can see the only required input parameter is what; next_date defaults to "now" (SYSDATE), interval defaults to NULL (i.e. do not repeat), no_parse defaults to false (don't not parse - arrrgh!), instance defaults to 0 (meaning any instance can run the job), and force defaults to false.

I have never used no_parse, instance (we only have one), or force.

If you view the source of the package header there are some other useful suggestions and explanations.
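
Assuming you have the privileges to see it, one way to read the header is straight from ALL_SOURCE;

SELECT text
  FROM all_source
 WHERE owner = 'SYS'
   AND name = 'DBMS_JOB'
   AND type = 'PACKAGE'
 ORDER BY line;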