Wednesday, December 11, 2013

MobileIron: Resolving "MobileIron iOS App Multi-Tasking Is Disabled" error

This one took a few minutes to resolve, so to save anyone else some time here is what you'll see in the MobileIron control panel;

MobileIron: Users and Devices
The warning is;

MobileIron iOS App Multitasking is Disabled
Verify Location Services is enabled on the device for the MobileIron app by going to Settings | Location Services, then launch the MobileIron app once.

If you have email notifications for your users turned on they'll have received a similar email message in their local language.

Unfortunately if you've been an iOS user for quite some time the title of the message will confuse you - you can't switch off Multitasking anymore. As is probably the case for lots of people (like me), you'll reach for Google and not read the rest of the message, which explains the problem and lets you know how to fix it.

The issue is, basically, that the MobileIron application cannot access Location Services. This will either be because Location Services has been completely turned off on the device or because the MobileIron application has been specifically denied access to it.

The first step to resolving it is to open the "Settings" application;

iOS7: Settings Application
Touch the "Privacy" item on the left (at the bottom of the "General" group);

iOS7: Settings > Privacy
In this page you'll have details of how much information you are sharing. Touch "Location Services" at the top;
iOS7: Settings > Privacy > Location Services
As you can see from this screen shot MobileIron is explicitly denied access to Location Services. If you switch this back to green then the warning will go away.






Designing and Building your CMDB, Part 2: Test, Test, and Test Again

With apologies for the large break between these two parts, a lot of project work has eaten up all my time and it's not been possible to post as often as I'd have liked!

So the question that was left at the end of Part 1 is: how do we know we've correctly identified everything related to our system for the CMDB? The answer is, because we didn't write the software and so can't claim to fully understand it, that we don't. Now I'll explain why that doesn't matter as much as you'd think it would.

As with most things in business it's not about a 100% guarantee - it's about doing exactly the right amount of work to minimise the risk of impact to the business from the "unknown unknowns" (those things we don't know we don't know). To work out whether we've reduced that risk we need to go back to the reason we're doing this: to take the knowledge out of people's heads and put it into a system so it can be shared. I don't need to create a CI for every item in a config file - the CMDB needs to know where the file is and what it's configuring. The rest is up to the person looking at it and the problem/ incident they're dealing with.

In order to test that our CIs will meet our needs we need to look at how we are expecting them to be used after we've created them. Here are a few examples of the scenarios we might like to test our configuration with;
  • A user X who has just joined the company needs to be added to the users list (1)
  • User X has changed role and no longer needs access (2)
  • User X needs access to the Admin Interface (3)
  • User X can't access the system (4)
  • User can access parts of the system but isn't seeing any maps (5)
  • Microsoft has released a patch for a critical vulnerability in IIS  and Engineer Y needs to find all the boxes with IIS installed so he can patch them manually (6)
  • Emails from "System X" don't seem to be being sent (7)
As you can see this list could go on forever but, as I pointed out earlier, we're not trying to capture every possible thing that could happen - we're just trying to cover the 90% of things that are most likely to occur and a few other things we (as developers) might like to worry about.

Now that we've got our list let's go through and see whether we have enough information so that someone who isn't familiar with the system but has access to the CI structure can solve the issues we've highlighted;
  1. Looking at the CI list we have the CIs "UK InfoMaps Standard Users" and "FR LocalMaps Standard Users", so if the new user joins in the UK we add them to the former; in France, the latter
  2. As 1, except rather than adding them we just remove them
  3. Again we have two active directory groups "UK InfoMaps Administrators" and "FR LocalMaps Administrators" so we can add them to the right group depending on the Country
  4. The first non-straightforward one! We have the two DNS entries; the person taking the call can quickly test these and see if the service is down for everyone or just the user. If it's down for everyone, is the box down? Is the Hyper-V host down? Is the database accessible? Is there an error message? In short there are lots of things to try - and with access to the CI list you can start to do clever things like look for other services using the same Hyper-V host; if they're running then the problem probably isn't with the entire host, etc
  5. Two DNS entries provide some points for testing - is Google down (it has been known ...)? Has the firewall changed so the ports are blocked?
  6. IIS is linked to WIN005 so it should just be a quick case of searching the CMDB for IIS and seeing which boxes have IIS components on them
  7. Is the SMTP server accessible? Is the user account locked?
As you can see there is a lot here that can be done with relatively little technical experience, and (trust me as a developer - this bit is key!) *if* the incident eventually gets escalated to a software engineer then there is going to be a lot more information in the call. Rather than having to chase people for answers to simple questions like "what box is it on?", the engineer will find that whoever answered the phone has already done most of that work. The key question is what you want your software engineers doing - chasing users for answers, or fixing issues and then getting on with other work?
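To make scenario 6 concrete, here's a minimal sketch (plain Python, not any particular CMDB product) of how a "which boxes have IIS?" question might be answered once software CIs record where they're installed; the record structure here is entirely hypothetical:

```python
# Hypothetical CI records (not a real CMDB product); the names match the
# example CIs from Part 1 of this series.
cis = [
    {"name": "IIS", "type": "Software", "installed_on": ["WIN005"]},
    {"name": "Visual Studio 2010 Tools for Office Runtime",
     "type": "Software", "installed_on": ["WIN005"]},
    {"name": "SQL Server 2008R2", "type": "Software", "installed_on": ["SQLC001"]},
]

def boxes_with(software_name):
    """Scenario 6: which boxes does Engineer Y need to patch?"""
    return sorted({box
                   for ci in cis
                   if ci["name"] == software_name
                   for box in ci["installed_on"]})

print(boxes_with("IIS"))
```

The point isn't the code - it's that once "installed on" is captured as a relationship rather than knowledge in someone's head, the question becomes a lookup anyone can do.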

The next part of this series (which will hopefully not take so long to put together) will continue this example and look at metrics and the things you might like to consider doing to keep your CMDB up-to-date and relevant as your business changes.

Windows Phone 8: Turning Off Data Roaming

Whilst on many personal mobile packages it's possible (and relatively cheap) to buy international data roaming (specifically within the European Union) there will always be times when you want to make sure you are not roaming for data.

On Windows Phone 8 it's relatively easy, touch the "settings" icon;
Windows Mobile 8: Settings Application
Then scroll down until you see "mobile network" and touch that;
Windows Mobile 8: Settings
The second option, after your mobile network, is titled "Data Connection" and is typically set to "on".

Beneath that is a "Data roaming options" drop down that can either be set to "roam" or "don't roam";
Windows 8 Mobile: Roaming Settings

If you change the option then the explanation text beneath it changes;

(roam) - "Depending on your service agreement, you may incur extra charges when using data roaming"
(don't roam) - "When entering a roaming area, your data connection will be turned off"

If you don't want roaming charges to appear on your bill (for data) then select "don't roam" in the drop down. This takes effect immediately.

Tuesday, November 19, 2013

Configuring MobileIron on Windows Phone 8 (Nokia Lumia 925)

This blog post is a quick guide to configuring MobileIron on Windows Phone 8.0. The software is integrated into the OS and therefore you don't need to install anything from the app store (like you do with iOS and Android).

To start you need to click on the "settings" icon from either your start screen or the Application List;

Windows Mobile 8.0 "Settings" Application
Once you open this application you're presented with an array of text options;

Windows Mobile 8.0 "Settings" Application (Opened)
Scroll down through the list until you find "Company Apps", touch that;

Windows Mobile 8.0 Settings > Company Apps
It's actually quite good to see these warnings. Although I did laugh a bit at the "What's a company policy?" being a hyperlink ... Touch "add account" to get started;

Windows Mobile 8.0 Settings - Company Apps - Add Account
You are now presented with two fields: your email address and your password. Once you've entered these touch "Sign in" and Microsoft will attempt to work out what configuration your IT Department/ Service Provider has put into place for you. I'm not 100% sure what this is doing - but for me, anyway, it didn't work. After a minute or two I was presented with a slightly more detailed option screen;

Windows Mobile 8.0 Settings - Company Apps - Add Account (More Detail)

The three new options are Username, Domain, and Server. For MobileIron (for my instance of it, anyway) it wasn't necessary to enter the username and domain, just the server.

Once that's done just touch "sign in".

And that was it, the phone is now in the hands of your company administrators. In my case this meant the configuration of an Exchange account.

I had a lot of trouble getting this working. A lot of trouble, and it's not clear where the problem lay. It would be easy to blame the phone (and certainly the one-error-message-fits-all approach wasn't particularly helpful - i.e. can it find the server? is the login incorrect?) but I can't be 100% certain. I will say though that I've never had this problem on an iPad or an iPhone, but that could just be down to luck ... Let me know in the comments if this works for you, or if it doesn't!!

Once you've configured "company apps" you'll see the familiar Apps@Work icon in your installed application list.

Friday, August 16, 2013

Designing and Building your CMDB, Part 1: From System Description To Configuration Items

This series of posts is going to be a slight departure from normal in that it won't be showing you any code. We are going through the process of designing a CMDB (that's a Configuration Management Database) to hold details for all the systems (500+) that we administer. The point of this post is to, by means of an example, show you the sort of questions you should be asking yourself when you put together a CMDB.

So let's start with a description of a system;

"System X is a fairly simple VB.NET solution deployed using IIS and installed on the server WIN005. It consists of two  applications; User Interface (an application open to all users), and Admin Interface (only available to a few).

The Admin Interface works on a vanilla IIS install but the User Interface requires the installation of  the Visual Studio 2010 Tools for Office Runtime.

The installation files for the software are located on WIN080\Software (a file share) as are the bi-monthly patch files that are applied.

At the back end the database, SYSXDB, is SQL Server 2008R2 and is held on a SQL Server cluster called SQLC001.

The application uses Active Directory for authentication, and the User Interface renders some information  from Google Maps to which it requires access.

The users of the Solution are spread across two Countries; France and the United Kingdom. We have internally configured the system so that in the UK users know the solution as 'InfoMaps' and in France it's known as 'LocalMaps'."

I'm sure there are parts of that description that you'll recognise from systems you've worked on. As you can see, despite it being only a fairly simple VB.Net website with a couple of plugins, there is already quite a lot of information here to capture in our CMDB. If we take this structure and put it into Visio then as a system overview we get something like this;

System X: An Overview
Now for most small organisations this is probably 95% of the information they're ever going to need. If you're a small company, aren't expecting to significantly increase in size, and aren't planning on managing hundreds of systems across the globe then you can make do - let's be honest, we all have our "go to guy" for a particular system and so long as they're not on holiday (or haven't left!) then they can keep the system ticking over quite happily from both the users' and management's perspective.

The problem comes when you don't just have one system, or a few; you start to have tens of systems like this and each system takes some time to administer. Suddenly your team of 3/4 software engineers don't have any time to do anything new because they're too busy keeping the systems the business already relies on working.

Once you approach this level you need to significantly increase the quality of information you are holding on each system; you stop needing "Bob" to fix your problem but instead you need "someone who knows IIS" or "someone who can fix SQL Server". If all the knowledge is in Bob's head then Bob quickly becomes a bottle-neck through which all issues have to go - this isn't good for Bob (although he might think in the short term that it is!) and it's certainly not good for the company or the users.

So let's go back to the description for System X and look for all the items in the configuration that we might want to store information on in our CMDB. Each of these items will become a Configuration Item (CI) in the CMDB. It's fairly easy looking at the system description to just pick things out;
  • IIS
  • WIN005
  • User Interface
  • Admin Interface
  • Visual Studio 2010 Tools for Office Runtime
  • WIN080\Software
  • SYSXDB
  • SQLC001
  • Active Directory
  • maps.google.com
This is a fairly long list, but is only part of the story. We (as IT Professionals) then need to take this list and add in the non-obvious things that will help us troubleshoot the system when there's a problem six months after it's gone live and we've all moved on to other projects. Again there is no easy way to do this and you're heavily reliant on vendors to provide "full and complete" information.

The sort of questions that need to be picked out from the system description are; have both applications been installed into the same Application Pool in IIS? Is the Application Pool running as a local user or is it using network credentials? How are we connecting to the database? Are users typing in http://win005 to access the site or have we setup DNS entries (http://infomaps for example)? How are we deciding if a user has access to the Admin Interface? Etc.

So let's assume someone technical has gone through the system, had the discussions with the vendor, and found out how everything is not just connected but configured. Here's the list of things we might like to consider turning into CIs in addition to the ones we've already identified;
  • Application Pool: SystemXUserInterface (Installed on WIN005)
  • Application Pool: SystemXAdminInterface (Installed on WIN005)
  • SYSTEMXSERVER (Active Directory account Used by both Application Pools and SQLC001 to grant access to SYSXDB)
  • "UK InfoMaps Standard Users" (Active Directory Group, Used By "System X User Interface")
  • "FR LocalMaps Standard Users" (Active Directory Group, Used by "System X User Interface")
  • "UK InfoMaps Administrators" (Active Directory Group, Used By "System X Admin Interface")
  • "FR LocalMaps Administrators" (Active Directory Group, Used by "System X Admin Interface")
  • DNS Entry: LocalMaps.ourcompany.org (Maps to WIN005)
  • DNS Entry: InfoMaps.ourcompany.org (Maps to WIN005)
  • SMTP.ourcompany.org (Used by System X Admin Interface to send email notifications)
  • Firewall Ports: 80,443 (Required for access to WIN005)
  • VT001 (Hyper-V server hosting WIN005 - a virtual server)
Now this list is looking a little more comprehensive!
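As a sketch of why the richer list matters, the CIs above can be modelled as a simple dependency graph; the structure below is hypothetical plain Python (not a CMDB product) and covers only a subset of the CIs, but walking it answers questions like "what could break if the Hyper-V host goes down?":

```python
# Hypothetical "depends on" edges between a subset of the System X CIs.
depends_on = {
    "System X": ["User Interface", "Admin Interface"],
    "User Interface": ["IIS", "Visual Studio 2010 Tools for Office Runtime",
                       "maps.google.com", "SYSXDB"],
    "Admin Interface": ["IIS", "SMTP.ourcompany.org"],
    "IIS": ["WIN005"],
    "SYSXDB": ["SQLC001"],
    "WIN005": ["VT001"],
}

def impacted_by(ci):
    """Return every CI that directly or indirectly depends on the given CI."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for parent, children in depends_on.items():
            if parent not in hit and (ci in children or hit & set(children)):
                hit.add(parent)
                changed = True
    return sorted(hit)

print(impacted_by("VT001"))  # everything sitting on the Hyper-V host
```

Whether you store this in a proper CMDB tool or a database, it's the relationships between CIs, not just the list of them, that make impact analysis possible.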

But how do we know we've captured everything? Or even captured enough detail to be able to properly support the system after we've put it in?

In Part 2 we'll look at "testing" our configuration to try and identify the gaps.

Tuesday, August 13, 2013

PL/SQL: Dynamically Building Your Data Archive

The purpose of this blog post is just to outline a design I put together as part of an internal project for dynamically building a data archive using rules based on the source data being fed into the system. It's far from complete but I think it highlights an interesting way of building an archive for your data when, at design time, you don't know exactly what data you will be putting into it.

THE PROBLEM
At the moment, in order to put data from various sources into the data archive, a multitude of different loading programs are used (SSIS, command-line applications, scripts, etc), each of which uses its own rules to determine where the source data ends up (largely dependent on what rules the developer used when putting it together), and inter-dependencies are largely invisible.

New feeds are added at a rate of one every other month and the system should cope with this while keeping track of the dependencies in the database.

DESIGNING THE SOLUTION
In essence the problem this solution is trying to solve is providing a single point of entry into the data archive: somewhere you can put your source data, which will then be moved into the archive using a pre-specified set of rules that determine where the data ends up and what format it's in.

A simple diagram for the system is;
System Diagram
The specific bit that is "in scope" for this work is the "LOAD Process". How data gets into the DATASOURCE tables is really dependent on where the data is coming from, what format it's in, etc, and it's practically impossible to write something so universally generic as to cover every possible option, from a CSV text file to a database link.

The aim of the solution will be to process the data as it arrives but it's possible that it could be adapted to work with data in batches.

THE PROPOSAL
I've created a fairly simple structure using the tables;
  • SOURCEDATATYPE - This holds a unique reference and description for each different data source
  • STAGINGOUTPUT - This table holds the raw data as loaded into the database from the external feed (I went with this name in case it's necessary to use staging tables for the IMPORT process to manipulate the data prior to it being loaded via the LOAD process)
  • ENTITY - This is the name for a table that is being created as part of the LOAD process in the Data Archive.
  • ENTITYDETAIL - This table contains information on how the data from the source table should be manipulated before being moved into the ENTITY table.
Here's a simple data structure;
Database Structure
As you can see it's pretty self-explanatory.

Once you've configured the data source type, and entity details then you're ready to start loading data.

In order to load the database I've created a package called DW_LOADDATA. This has two routines;
  • ProcessAll, and
  • ProcessRow (p_rowID ROWID)
Basically "ProcessAll" loops through the unprocessed rows and passes them one at a time to the "ProcessRow" routine.

The "ProcessRow" routine performs the following steps;
  • Get the new record from STAGINGOUTPUT
  • Identify the ENTITY/ENTITYDETAIL for the feed specified in the STAGINGOUTPUT record
  • Check to see if the ENTITY exists - if not create it.
  • Work out the column name, and if that doesn't exist as part of the ENTITY create it
  • Does a value already exist? If so update it (using MERGE), otherwise INSERT the new value
  • Mark the STAGINGOUTPUT record as processed
Sounds simple? Well, it's less than 150 lines of code including comments and formatting ...
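For anyone who'd rather see the shape of those steps than read PL/SQL, here's a rough Python sketch of the same upsert logic; an in-memory dict stands in for the dynamically created ENTITY table, and the field names are made up for illustration:

```python
from datetime import date

# An in-memory dict stands in for the dynamically created ENTITY table;
# the real package does the equivalent with dynamic DDL and a MERGE.
entity = {}  # {row_key: {column_name: value}}

def process_row(staging):
    """Apply one STAGINGOUTPUT record, creating the column if needed."""
    # column_name_expression: a date of 13-JAN-2013 becomes PERIOD_201301
    column = "PERIOD_" + staging["date01"].strftime("%Y%m")
    key = staging["number01"]            # row_unique_expression
    row = entity.setdefault(key, {})     # create the ENTITY row if missing
    # value_expression: MERGE-style upsert, adding to any existing value
    row[column] = row.get(column, 0) + staging["number04"]

process_row({"number01": 42, "date01": date(2013, 1, 13), "number04": 100})
process_row({"number01": 42, "date01": date(2013, 1, 20), "number04": 50})
print(entity)  # {42: {'PERIOD_201301': 150}}
```

Two rows for the same supplier and month land in the same dynamically named column, which is exactly the behaviour the ENTITYDETAIL expressions drive in the database version.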

The key is the information in the ENTITY/ENTITYDETAIL tables. For example let's suppose I'm loading sales data and I want to create an ENTITY called SUPPLIER_SALES_BY_MONTH with separate columns for each month of data.

In the ENTITY table I'd create a simple record with the name of the new ENTITY (bearing in mind the actual name of the table will be prefixed with the Short_Code from the SOURCEDATATYPE table) and then in the ENTITYDETAIL table create the following rows;

INSERT INTO ENTITYDETAIL
SELECT 1, 1, 2,
  '''PERIOD_'' || TO_CHAR(SO.DATE01, ''YYYYMM'')', -- column_name_expression
  'SO.NUMBER01', -- row_unique_expression
  'OLD.VALUE = NVL(OLD.VALUE, 0) + SO.NUMBER04', -- value_expression
  'NUMBER', -- on_create_type
  '0' -- on_create_default
FROM DUAL
UNION SELECT 1, 1, 1,
  '''SUPPLIER_NAME''', -- column_name_expression
  'SO.NUMBER01', -- row_unique_expression
  'OLD.VALUE = SO.TEXT01', -- value_expression
  'VARCHAR2(80)', -- on_create_type
  '0' -- on_create_default
FROM DUAL


I know "INSERT INTO ..." probably isn't the best way to do this but this is only an example!

As you can see the column_name_expression is looking at the SO (STAGINGOUTPUT) table and formatting the first date to YYYYMM - so a value of 13-JAN-2013 will create/ update the column PERIOD_201301.

The value (for the supplier) is being updated to add on the sales for that month.

The second column that's created is the SUPPLIER_NAME - this is simply the name of the supplier. If I run this using some random test data I end up with a table that looks like;
Generated Table
I've created a script which creates the objects and loads some simple test data. It's available here (via Google Drive - DO NOT RUN IT IN AN EXISTING DATABASE SCHEMA UNLESS YOU WANT OBJECTS STARTING WITH SAL_ TO BE DROPPED!). You'll need to have set up a user with default tablespace permissions in order to get the script to work.

Let me know in the comments if you find this useful.

Friday, June 21, 2013

Testing SMTP Connections Using Telnet



So here’s a quick list of commands that will test an SMTP connection. The first thing to do is to make sure that you have used “Run as” to start the command window. Then type;

telnet [server name] 25

helo  [server name]

mail from:

rcpt to:

data

Some sort of random text you want to see in the email body …

.

Here’s the test output;
Telnet/SMTP Sample Output
The most common error you'll get is a relay failure - if you want to fix this you just need to make sure the "from" email address is internal to the organisation hosting the SMTP server (for example use Gmail accounts if using Google's SMTP server, etc).
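If you'd rather script the same check than type it into telnet, here's a rough sketch using Python's standard smtplib. To keep it self-contained and runnable it talks to a tiny fake SMTP server on localhost; point SMTP() at your real server name and port 25 to test a live connection (the hostnames and addresses are placeholders):

```python
import smtplib
import socket
import threading

def fake_smtp_server(server_sock):
    """Accept one connection and speak just enough SMTP for the client below."""
    conn, _ = server_sock.accept()
    conn.sendall(b"220 fake ESMTP ready\r\n")
    reader = conn.makefile("rb")
    in_data = False
    for line in reader:
        stripped = line.rstrip(b"\r\n")
        if in_data:
            if stripped == b".":          # end-of-message marker, as in the telnet session
                in_data = False
                conn.sendall(b"250 message accepted\r\n")
            continue
        cmd = stripped.upper()
        if cmd.startswith(b"DATA"):
            in_data = True
            conn.sendall(b"354 end data with <CRLF>.<CRLF>\r\n")
        elif cmd.startswith(b"QUIT"):
            conn.sendall(b"221 bye\r\n")
            break
        else:                             # HELO, MAIL FROM, RCPT TO, ...
            conn.sendall(b"250 ok\r\n")
    conn.close()

server_sock = socket.socket()
server_sock.bind(("127.0.0.1", 0))        # pick any free local port
server_sock.listen(1)
port = server_sock.getsockname()[1]
threading.Thread(target=fake_smtp_server, args=(server_sock,), daemon=True).start()

smtp = smtplib.SMTP("127.0.0.1", port)    # telnet [server name] 25
smtp.helo("test.example.org")             # helo [server name]
refused = smtp.sendmail(                  # mail from: / rcpt to: / data / .
    "me@example.org",
    ["you@example.org"],
    "Some sort of random text you want to see in the email body ...")
smtp.quit()
print("refused recipients:", refused)     # empty dict means all recipients accepted
```

Against a real server a relay failure shows up as an SMTPRecipientsRefused exception rather than a quiet success, which makes this handy for automated monitoring.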

Thursday, May 16, 2013

SSRS: Searching The Reporting Database - Which Reports Include Subreport XXX?

I've been tasked with splitting several existing reports into two (one for one set of users, one for a different set) and while I was looking at using Linked Reports, unfortunately the software that actually pushes the reports out to the end-users doesn't support them.

There also doesn't seem to be a "Dependencies" link which would allow me to see which reports are dependent on the sub-report I've been asked to change.

Digging through the various SQL examples that are out there, there didn't seem to be anything that did exactly what I was after *without* making it unnecessarily complicated.

Here's the SQL I ended up with;

SELECT *
  FROM (SELECT *,
               -- Strip the UTF-8 byte-order mark (0xEFBBBF) if present, then
               -- convert the binary RDL content to text so LIKE can search it
               CASE
                 WHEN LEFT(CONVERT(varbinary(max), Content), 3) = 0xEFBBBF
                   THEN CONVERT(varchar(max),
                                SUBSTRING(CONVERT(varbinary(max), Content),
                                          4,
                                          DATALENGTH(CONVERT(varbinary(max), Content))
                                         )
                               )
                 ELSE
                   CONVERT(varchar(max), Content)
               END AS ContentXML
          FROM Catalog C) AS C
 WHERE C.ContentXML LIKE '%Subreport%'
   AND C.ContentXML LIKE '%SUB_REPORT_NAME%'
   AND C.Path LIKE '/SUB_REPORT_FOLDER/%'


The point of including the sub-report folder filter is to only pick up items in a single folder (or sub-folder), as we have PROD, DEV, and TEST all on the same server (in different folders).
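If it helps to see the BOM-stripping logic outside SQL, here's the same idea as a short Python sketch (the sample RDL string is made up for illustration): the Catalog.Content column is a byte blob that may start with a UTF-8 byte-order mark, which has to be removed before the report definition can be searched as text.

```python
def rdl_contains(content: bytes, needle: str) -> bool:
    """Strip a UTF-8 BOM if present, then search the RDL as text."""
    if content[:3] == b"\xef\xbb\xbf":   # same check as LEFT(...,3) = 0xEFBBBF
        content = content[3:]
    return needle in content.decode("utf-8")

# A made-up RDL fragment, deliberately encoded with a leading BOM.
rdl = "\ufeff<Report><Subreport Name='SUB_REPORT_NAME'/></Report>".encode("utf-8")
print(rdl_contains(rdl, "Subreport"))  # True
```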

Hope this saves you the time it took me sorting it out!

Wednesday, May 1, 2013

Excel 2013: Getting Data From a Parametrized SQL Query (vs SQL Server)

I would have thought that dragging data from SQL Server into Excel (both Microsoft products) would be easy - and it is if you're looking to drag in entire tables, views, etc. But if you want to do something a little more complicated with parameters it becomes a lot harder and less intuitive.

The example below shows how to get the ExecutionLogs from a SQL Server instance between two dates.
 
I'm going to use Excel 2013 as it's the latest version and the one I have to hand.

Create a blank workbook by selecting "Blank workbook" (which is usually the first option in the middle of the screen);
Excel 2013: New "Blank workbook" Tile
Select the "Data" page in the ribbon and then click on "From Other Sources" in the "Get External Data" part of the ribbon (on the left). Select "From Microsoft Query" (which should be the very bottom option);

Excel 2013: Data Page
NOTE: you may think selecting "SQL Server" is a slightly more obvious choice. However this will not allow you to use parametrized SQL - it's just for direct export from tables or views (why that's the case is beyond me!).

This will then open the "Choose Data Source" dialog;

Excel 2013: Choose Data Source Dialog
This dialog clearly dates from an earlier version of Windows and it's difficult to see why Microsoft didn't "update" it with the rest of the 2013 look-and-feel. I'm running Windows 7 but I have a sneaking suspicion that everyone from Windows XP onwards will be familiar with this dialog (although possibly not with the addition of "OLAP Cubes").

This dialog also isn't part of Excel; it's a separate application. Sure, Microsoft will score some marks for re-use of a standard Windows component, but the change in interface is jarring to say the least ... and it gets worse.

Leave "New Data Source" highlighted and click "OK";

Excel 2013: Create New Data Source Dialog
We seem to have slipped back to a pre-wizard era and we now have fields labelled 1 to 4. When we complete field 1, field 2 becomes available; on completing field 2, field 3 becomes available. This is jarringly different from the other dialogs within Excel 2013.

Anyway, populate fields 1 and 2 in the dialog, selecting "SQL Server" from the drop down (in mine it was at the very bottom). Then click "Connect ...";

Excel 2013: SQL Server Login
Enter the login information - "Use Trusted Connection" means use your already-authenticated (Active Directory) credentials. Once you've entered a Server the "Options" box at the bottom right will become available; click on it;

Excel 2013: SQL Server Login Dialog - Extended
Use the "Database" drop down to select the database you wish to connect to. If you leave it as default it will pick the default database for your database user.

Click "OK".

Click "OK" again (on the "Create Data Source" dialog) - do not pick a table in the bottom drop down, we're going to use SQL with parameters.

The data source you just created should be selected (in the "Choose Data Source" dialog) so just click "OK".

You will then be presented with the "Query Wizard - Choose Columns" dialog;

Excel 2013: Query Wizard - Choose Columns
Now you'll notice that you can't do anything from this stage *except* select a table.

Click "Cancel" (at the bottom right);

Excel 2013: Microsoft Query Confirmation Dialog
 Click "Yes";

Excel 2013: Add Tables Dialog
We're not working with tables so click "Close";

Excel 2013: Microsoft Query
Click on the "SQL" button on the menu bar;

Excel 2013: Microsoft Query - SQL Dialog
Here is the SQL we are going to use;

SELECT
  EL.InstanceName,
  EL.ItemPath,
  EL.UserName,
  EL.ExecutionId,
  EL.RequestType,
  EL.Format,
  EL.Parameters,
  EL.ItemAction,
  EL.TimeStart,
  EL.TimeEnd,
  EL.TimeDataRetrieval,
  EL.TimeProcessing,
  EL.TimeRendering,
  EL.Source,
  EL.Status,
  EL.ByteCount,  EL.AdditionalInfo
FROM ExecutionLog3 EL
WHERE EL.TimeStart >= ["Min Start Date"]
AND EL.TimeStart < ["Max Start Date"]
ORDER BY EL.TimeStart DESC


Enter the SQL and click "OK".

NOTE: There are a couple of "gotchas" here. The SQL is processed prior to being run and it isn't particularly flexible. If you use ANSI SQL (JOIN ... ON ...) then at best you won't get the graphical interface, or it just won't work. Equally, including square brackets [] seems to break the SQL, as does having "dbo." in front of the table name.

"Broken" SQL is usually identified by you being told that the SQL doesn't support the graphical interface. This is usually a prelude to a more obscure error.
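For comparison, this is the general parametrized-query pattern that MS Query is wrapping up: the values are bound to placeholders at run time rather than concatenated into the SQL. A quick sketch using Python's built-in sqlite3 as a stand-in database (not the SSRS server; the table and dates are made up):

```python
import sqlite3

# An in-memory stand-in for the reporting database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ExecutionLog3 (ItemPath TEXT, TimeStart TEXT)")
db.executemany("INSERT INTO ExecutionLog3 VALUES (?, ?)",
               [("/Reports/Sales", "2013-04-01"),
                ("/Reports/Stock", "2013-05-20")])

# The equivalent of the B1/B2 cell values feeding the two parameters.
min_start, max_start = "2013-05-01", "2013-06-01"
rows = db.execute(
    "SELECT ItemPath FROM ExecutionLog3 "
    "WHERE TimeStart >= ? AND TimeStart < ? ORDER BY TimeStart DESC",
    (min_start, max_start),
).fetchall()
print(rows)  # [('/Reports/Stock',)]
```

Changing the two cell values and hitting "Refresh all" in Excel is doing essentially this: re-running the same SQL with new bound values.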

Providing everything is working OK you'll see;

Microsoft Query: Sample Data
Click on the "Exit" button (fourth from the left at the top left).

This closes Microsoft Query and returns control to Excel. The "Import Data" dialog will now appear;

Excel 2013: Import Data Dialog
Change "=$A$1" to "=$A$4" (so we have a few lines for the parameter entry boxes) and click "OK";

Enter "Start Date" in A1 and "End Date" in A2 (just labels), then two dates in B1 and B2 (these will be the from/to dates we run the report with);

Excel 2013: Parameter Values in Excel
Now we need to link up the cells we've used with the parameters in use in our query. Click on the "Data" tab in the ribbon and then "Connections";

Excel 2013: Connections
Select the connection and then click "Properties";

Excel 2013: Connection Properties
Click on the "Definition" tab;

Excel 2013: Connection Properties - Definition Tab
Click on the "Parameters" button at the bottom (if you have used the SQL Server option in Excel this is where you'd have the problem - "Parameters" would be permanently greyed out);

Excel 2013: Parameters
As you can see in the list there are two parameters, the two we created earlier in the SQL. Both are currently set to prompt us for values. Click on the "Get the value from the following cell" radio group and select the cell we have entered the Start Date in;

Excel 2013: Default Parameter Value
You can also check the "refresh automatically when cell value changes" box if you want to work that way.

Repeat the process with the Max Start Date Parameter.

Click "OK" (closes the Parameters dialog)

Click "OK" (closes Connection Properties dialog)

Click "Close" (closes Workbook Connections dialog)

Click "Refresh all" (in the ribbon)

And we're done! If this was useful for you don't forget to leave a comment ...