What Is LookML?
LookML is a language for describing dimensions, aggregates, calculations, and data relationships in a SQL database. Looker uses a model written in LookML to construct SQL queries against a particular database.
LookML Projects
A LookML Project is a collection of model, view, and dashboard files that are typically version controlled together via a Git repository. The model files contain information about which tables to use and how they should be joined together. The view files contain information about how to calculate information about each table (or across multiple tables, if the joins permit it).
LookML separates structure from content, so the query structure (how tables are joined) is independent of the query content (the columns to access, derived fields, aggregate functions to compute, and filtering expressions to apply).
LookML separates content of queries from structure of queries
SQL Queries Generated by Looker
For data analysts, LookML fosters DRY style (“don’t repeat yourself”), meaning you write SQL expressions once, in one place, and Looker uses the code repeatedly to generate ad-hoc SQL queries on the fly. For business users, the end result is the ability to build complex queries in Looker, focusing only on the content they need, not the complexities of SQL structure.
A query built in Looker that joins values from multiple tables (ORDERS ITEMS, ORDERS, PRODUCTS, USERS, and USERS ORDER FACTS)
In the figure above, the end-user doesn’t need to understand SQL expressions for joins or filtering.
A Dependency Language for Describing Data Structures
LookML is a dependency language like make, as opposed to an imperative language like C or Ruby. LookML provides predefined data types and syntax for data modeling. LookML syntax has a structure that is clear and easy to learn. (You don’t need prior experience with programming languages, everything you need to know is documented here.) LookML is independent of particular SQL dialects, and encapsulates SQL expressions to support any SQL implementation.
Code Sample
The example below shows a minimal LookML project for an e-commerce store, which has a model file and two view files:
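The original files aren't reproduced here, but a minimal sketch might look like the following (the connection, file, and field names are illustrative assumptions):

```lookml
# ecommerce.model -- the model file: declares the connection,
# includes the views, and defines how tables join
connection: "ecommerce_db"
include: "*.view"

explore: orders {
  join: customers {
    sql_on: ${orders.customer_id} = ${customers.id} ;;
    relationship: many_to_one
  }
}
```

```lookml
# orders.view -- one of the two view files (customers.view would be similar)
view: orders {
  sql_table_name: orders ;;

  dimension: id {
    primary_key: yes
    type: number
    sql: ${TABLE}.id ;;
  }

  dimension: customer_id {
    type: number
    sql: ${TABLE}.customer_id ;;
  }

  measure: count {
    type: count
  }
}
```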
LookML Is Case-Sensitive
LookML is case-sensitive, so be sure to match the case when referring to LookML elements. Looker will alert you if you have referred to an element that doesn't exist. In the example below, the developer has incorrectly capitalized "FLIGHTS." The Looker IDE shows a warning that `e_FLIGHTS_pdt` does not exist. In addition, the IDE suggests the name of an existing Explore, `e_flights_pdt`:
However, if your project contained both `e_FLIGHTS_pdt` and `e_flights_pdt`, the Looker IDE would not be able to correct you, so you would have to be sure which version you intended. Generally, it's a good idea to stick with lowercase when naming LookML objects.
IDE folder names are also case-sensitive. You must match the capitalization of folder names whenever you specify file paths. For example, if you have a folder named `Views`, you must use this same capitalization in the `include` parameter. Again, the Looker IDE will indicate an error if your capitalization doesn't match an existing folder in your project:
Overview of Fundamental LookML Elements
The following diagram shows fundamental LookML elements and their relationships. For more detail, see LookML Terms and Concepts.
Using the IDE
Looker’s integrated development environment (IDE) has several features to assist you in writing LookML.
Autosuggest
As you type, the IDE suggests possible parameters and values that are sensitive to the context of what you are typing. For example, the suggestions for a dimension's `type` parameter will only include valid options for that parameter. Also, fields in `sql` parameters have to be marked with `${...}`, so the IDE adds that syntax when suggesting fields.
Autosuggest automatically appears wherever it can be shown. To close it, press the escape key on your keyboard. To view it at any point, press Ctrl+Space (Windows) or Control-Space (Mac).
On-the-Fly Error Checking
The IDE catches syntax errors as you type. A red X in the gutter indicates a syntax error, which is underlined in red. When you hover over the red X, a short description of the problem appears.
The LookML Validator is still required to perform a full model validation. Some errors, such as an invalid field reference due to a missing join, require a holistic look at the model and therefore are only surfaced when the LookML Validator is run.
Context-Sensitive Help Panel
On the right side of the IDE is a context-sensitive help panel that dynamically updates based on the position of your cursor. The help panel:
- Provides a description of the LookML parameter
- Shows all possible subparameters or a list of allowed values
- Links to the documentation pages for the parameter and all subparameters
Automatic Formatting
When you are in Development Mode, you can use the format file keyboard shortcut to automatically indent your LookML file. The indentation indicates the hierarchy of parameters and subparameters, making your LookML easier to read and understand.
Folding
A small arrow appears in the gutter next to the start of each chunk of LookML. Clicking this arrow folds or unfolds that section of LookML so that you can focus on just particular sections.
You can also use the fold all and unfold all keyboard shortcuts to expand and collapse all LookML sections down to the top-level parameter within the current file.
Marking Additions, Changes, and Deletions
In the LookML IDE, several indicators are always displayed when you are in Development Mode and have uncommitted changes.
- The file’s name is displayed in blue in the left side of the window to indicate that the file is new, changed, or renamed.
- When you open a new or changed file, the line numbers inside the file are highlighted in green to show new or changed lines, and a horizontal red line indicates removed lines.
Adding Comments
You can add comments to your LookML to make it more readable. To add a single-line comment, use the `#` character:
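For example (the field name is illustrative):

```lookml
# The customer's home state, used for regional reporting
dimension: state {
  sql: ${TABLE}.state ;;
}
```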
You can comment out an entire block of code using keyboard shortcuts:
- Select the lines you want to comment out
- Press ⌘ + / on a Mac, or Ctrl + / on a Windows computer
The IDE will add the `#` character to each of your selected lines.
Commenting Out SQL Blocks
If you are commenting out lines that are entirely within a SQL code block, manually add SQL comment notation.
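For example, a sketch using SQL `--` comments inside a `sql` parameter (all names are illustrative):

```lookml
dimension: full_name {
  sql: CONCAT(
    ${TABLE}.first_name, ' ',
    -- ${TABLE}.middle_name, ' ',
    ${TABLE}.last_name
  ) ;;
}
```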
When you use the SQL comment block, the commented-out sections will still be interpreted as LookML code. Therefore, you will see your comments within LookML-generated SQL, but the comments won’t affect the query results.
Finding and Replacing Text in Your Project
The Looker IDE has a find and replace function so that you can search all of your project files for text, and replace all instances or selected instances in a bulk operation.
The Looker IDE find and replace is a great tool for locating LookML elements in large LookML projects and for replacing text in descriptions and labels. If you want to search and replace in your project for the names of models, Explores, and fields, use instead the Content Validator’s find and replace function. Changing the names of models, Explores, and fields can break references within your model, and the Content Validator can validate your LookML references and fix errors in your LookML after making these types of changes.
To use the Looker IDE find function:
- Navigate to the search panel in the IDE.
- Enter a term in the Find field.
- Optionally, you can select Case Sensitive to search only for instances that match the case you entered in the Find field, or select RegEX to perform a regular expression search.
- Click Find.
- Optionally, you can click a file’s name to open that file in the IDE.
If you are in Development Mode, you will also see the Replace With field. To replace text in your project:
- Enter replacement text in the Replace With field.
- Click Select & Replace to preview how your text would be replaced in your project.
The results show all instances in your project highlighted for replacement. You can use the following options to customize which instances are replaced:
- Use the checkbox next to the results summary to select or deselect all instances across your entire project.
- Use the checkbox next to a single instance to select or deselect that instance.
Click the Replace Selected button to apply the replacement to the instances you’ve selected. The IDE will show the changes that were made:
Creating and Testing New Fields
As an example of editing LookML, we’ll add several fields and then test them.
Creating a Dimension
First we will add a new dimension to our `users` view that determines whether a user is from California or New York. The dimension will be `type: yesno`, which means it will return Yes if the user is from California or New York, and No if not.
The LookML for our new dimension looks like:
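A sketch of such a dimension (the `state` column is assumed to exist on the underlying table):

```lookml
dimension: from_ca_or_ny {
  type: yesno
  sql: ${TABLE}.state = 'California' OR ${TABLE}.state = 'New York' ;;
}
```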
Add this dimension to the view file for the `users` view. Click Save Changes to save your changes.
Check out substitution operators to learn more about `${TABLE}.state`.
Creating a Measure
Next we will add a new measure to our `users` view that averages the age of our users. This measure will be `type: average` and will aggregate over the column `age`.
The LookML for this new measure looks like:
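A sketch of such a measure (the `age` column is assumed to exist on the underlying table):

```lookml
measure: average_age {
  type: average
  sql: ${TABLE}.age ;;
}
```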
Add this measure to the view file for the `users` view. Click Save Changes to save your changes.
Testing the Fields in the Explore
You can test your new dimension and measure by querying them. After you save your changes, these fields will appear in the Field Picker in the Explore. An easy way to access the Explore for the current view is to use the drop-down menu next to the view filename:
Once we are in the `users` Explore, we can select the new fields to add them to a query. A query with both new fields shows the average age of users who are from California or New York, versus users who are not:
You must be in Development Mode to access these new fields until you have committed and pushed your changes to production.
Validating Your LookML
When you are satisfied with your updates, you can save your changes. The IDE will alert you to any unresolved syntax errors within a single file.
Next, use the LookML Validator to perform a full model validation. Some errors, such as an invalid field reference due to a missing join, require a holistic look at the model and therefore are only surfaced when the LookML Validator is run. Be sure to validate your LookML changes before publishing them to the production environment. Although validation will not catch every issue, such as database permission issues, it will prevent most errors.
The LookML Validator should not be confused with the Content Validator, which instead checks to see if the changes you’ve made will break any saved Looks.
The LookML Validator scans only LookML files that have been updated since the last LookML validation, or files that are affected by updates:
- If a model-level setting changes, everything is revalidated.
- If a view changes, only the Explores where it is used are revalidated.
- If an Explore changes, only that Explore is revalidated.
Chat Team Tip: If you notice that the LookML validation takes a long time to complete, the first thing to check is the `include` parameter in your model file. If you include all of your view files (`include: "*.view"`), the LookML Validator will have to check them all, which can affect performance. If this is the case, update the model file's `include` parameter so that only the needed view files are included.
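For example, rather than a wildcard include, you might list only the views the model actually uses (file names are illustrative):

```lookml
# Before: every view file is validated along with this model
# include: "*.view"

# After: only the views this model needs
include: "users.view"
include: "orders.view"
```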
Running Validation
To run the LookML Validator, click the Validate LookML button at the top right of the Looker IDE; or click the Code Validator icon at the top of the IDE to open the code validation panel, and click the Validate LookML button there:
After you run the LookML Validator, you may see a list of errors and other warnings that you should address:
The validator button in the code validation panel will become available again if you make and save another change or fix:
Validation Messages
Looker displays validation messages after running validation on your LookML.
LookML dashboards show informational messages, rather than warnings, in the sidebar when permissive localization is enabled.
No LookML Errors Found
When there are no issues found by the validator, you will receive the No LookML errors found message in green.
LookML Errors
LookML errors are issues that could prevent queries from running. The number in parentheses is the number of errors found (eight errors below):
Within the expanded list of issues you will see the reason why validation didn't pass. Oftentimes, clicking an error will bring you directly to the problem row of code. You'll see a red "X" next to the row. Hovering over it will provide more detailed error information in some cases:
Chat Team Tip: The validation error we are asked about most is “Unknown or inaccessible field.” Check out this Help Center article for the causes and what to do about it.
LookML Warnings
LookML warnings may not necessarily prevent a query from being run, but may still result in broken or unintended functionality for your users. As with errors, the number in parentheses is the number of warnings found (six warnings below):
As with LookML errors you can expand any warnings, jump to the problem code by clicking on them (in many cases), and hover over the red “X” icon for more information:
Deploying Your Changes
After you’ve validated that your changes will work properly, you can use Looker’s Git integration to commit and deploy your changes to production.
If you change field names that serve as filters in your Looks or dashboards, be sure to review the Filters section of your scheduled Looks and dashboards and update the filters as needed to reflect your changes. If a scheduled content delivery includes filters that no longer function (for example, if the referenced field has changed), the scheduled delivery could expose unfiltered data.
The Content Validator searches your LookML for model, Explore, and field names referenced in your Looker content. It’s a super useful tool for validating your LookML references, for fixing errors in your LookML after making changes, and for updating the names of your LookML elements across your project.
You must have the `develop` permission to use the Content Validator.
To open the Content Validator, select Content Validator from the Develop menu:
From here, you can use the Content Validator in two ways:
- Click the Validate button to find and fix errors due to changes in your LookML model.
- Click the Find & Replace in All Content button to find and replace the model, Explore, or field names across your Looker content, whether or not there are errors.
Before Using the Content Validator
You must be extremely careful when making changes using content validation. The Content Validator does not have an undo function, and it has the potential to impact many Looks and dashboards, along with their Explore from Here links. If you do make a mistake, you should attempt to correct it as soon as possible, before additional changes make it harder to isolate the affected Looks.
Also, consider your timing when using the Content Validator. When you push development changes to production you should fix affected Looks and tiles at approximately the same time, so that they are broken for the minimum amount of time. If you fix Looks and tiles too early, based on your work in Development Mode, they will appear broken to users until you push your changes. Similarly, if you delay in fixing Looks and tiles until long after pushing the change, they will appear broken to users until you fix them.
If you use the Content Validator to change field names that serve as filters in your dashboards, be sure to review the Filters section of your dashboard schedules and update the filters as needed to reflect your changes. If a scheduled dashboard delivery includes filters that no longer function (for example, if the referenced field has changed), the scheduled delivery will fail and the schedule owner will receive an error message that a filter is invalid.
Use caution when using the Content Validator to make a change to an Explore used in a merged results dashboard tile, as the Content Validator applies any changes to every Explore included in the merged results tile. For example, you cannot use the Content Validator to change the model name in a single Explore in a merged results tile without changing the model name in all of the Explores included in the merged results.
Using the Content Validator to Fix Errors
The Content Validator is a useful tool for finding and fixing errors resulting from changes in your LookML. For example, if you change a field name from `customer` to `user`, any Looks or dashboard tiles that included the `customer` field will no longer function (unless you used the `alias` parameter). The Content Validator will list any references to the `customer` field, and will also provide an easy way to fix the error.
When you run the Content Validator, it validates all references that your Looks and dashboards make to your LookML models, Explores, views, and fields, and will show an error for any references your content makes to an unknown LookML object.
The Content Validator does not show errors for the following:
- Content for which you do not have the `develop` permission. The Content Validator is scoped to your permissions, so you can only use it on content for which you have `develop` permission (the models you can see in the Looker IDE). See the Roles documentation page for information on model permissions.
- Looks that have been deleted and are in the Trash. If you want to validate a deleted Look, your Looker admin must restore the Look first.
Running the Content Validator
Read the section Before Using the Content Validator before proceeding. The Content Validator can impact many parts of your Looker content, so it’s important to plan and consider beforehand.
To validate your LookML with the Content Validator:
- Click the Validate button.
- Click the Group by tabs to change how errors are grouped:
- Error: List each error, grouping together the Looker content that has the error. This is useful if you want to fix the same error in multiple pieces of content at once.
- Folder: List each folder, grouping together the Looker content that has errors. This is useful if you want to fix only the errors in a particular folder.
- Content: List each piece of content that has errors, grouping together its errors. This is useful if you want to fix all of the errors in a single Look, tile, filter, etc.
- If you are in Development Mode, it’s important to understand the implications of changing Looks and tiles. If you “fix” them before making your breaking changes live, there will be a period of time when the fix is premature, so users will experience broken Looks or tiles. Looker provides a warning message to help you remember this.
- The error table displays any error, along with a list of Looks and tiles that contain the error, plus the underlying model and Explore that are producing the error. You can use the Group by tabs at the upper right of the page to change the layout of the error table.
- Looker provides the Replace and/or Remove buttons for each row, depending on the error type. These buttons provide functionality to fix the errors and are described in more detail below.
Viewing the Content Validation Results
The content validation results show Looker content that uses model names, Explore names, view names, or field names that do not exist or can’t be found. See this section above for details about the LookML elements that the Content Validator can find and validate.
Validation results are based on the LookML available in the mode you’re in when you run the validator. If you are in Production Mode, the validation results will reflect all LookML that has been pushed to production. If you are in Development Mode, the validation results will reflect your saved LookML, even if it hasn’t been pushed to production.
Errors may be due to intentional changes in your LookML, or due to a typo or a missing join.
To adjust errors, you can use the buttons in each line of the error table:
Looker provides the Replace button for each error (see this section below for how to replace names with the Content Validator). For errors with field names in the Data section of a Look or tile, Looker also displays the Remove button (see this section below for how to remove names with the Content Validator).
Depending on your Group by setting, the buttons will apply to a single item (Look or tile), or to multiple items:
- If you group by Error, the buttons apply to all occurrences of that error in all Looks and tiles on your Looker instance — enabling you to fix all occurrences in a single operation.
- If you group by Folder or by Content, the buttons apply to one occurrence of the error in a single Look or tile — enabling you to consider each occurrence of the error separately.
Here’s an example where the table is grouped by Error, so the buttons affect all of the instances of that error:
Here’s an example where the table is grouped by Content, so the buttons affect just the one instance of the error on a single piece of content:
In addition, if you group by Content, you have the additional option to delete Looks.
Replacing Names for Fields, Views, Explores, and Models
The content validation results include a Replace button for the following types of elements so that you can rename them:
- Model names in Looks and dashboard tiles
- Explore names in Looks and dashboard tiles
- View names in custom filters or table calculations
- View names in Looks and dashboard tiles
- Field names:
- In the Data section of a Look
- In a query-based dashboard tile
- Referenced in table calculations
- Referenced in custom filters
- Referenced in custom fields
- Referenced in visualization configurations, such as the Customizations area of the Series tab for column charts
- Referenced in dashboard filters of `type: field`
- Referenced by a dashboard tile in order to listen to a dashboard filter (this is configured in the Tiles to Update tab of the dashboard filters screen, described on this page)
For any of these errors, you can use the Content Validator to replace an attribute of the content that is equal to or more general than the attribute that is erroring:
- If a field is erroring, you can replace/remove the field, or you can replace the view, Explore, or model name.
- If an Explore name is erroring, you can replace the Explore name or the model name.
- If a model name is erroring, you can replace the model name.
Here’s how to use the Content Validator to replace the names of elements in your model:
- Click Validate to run the Content Validator.
- Click a Group by setting to choose how errors are grouped. In this example, we’re grouping by Error so we can simultaneously adjust multiple items at once.
- In the error table, click the Replace button next to the error you want to correct.
- In the dialog, select the type of LookML element you want to change: field, view, Explore, or model. Looker displays the appropriate options for each error.
- Verify the name of the item you want to replace. The Content Validator fills this in automatically.
- Enter the new name of the item.
- If you have grouped by Error and there are multiple items that will be affected, you can click Show Content to see a list of items that the Content Validator will update.
- Optionally, you can uncheck any listed items to leave their names unchanged.
- Click the button to make the change.
Removing a Field Name
For errors with field names in the Data section of a Look or tile, the error table will provide a Remove button to the right of the error. You can use the Content Validator to remove fields that are:
- In the Data section of a Look or dashboard tile
- Referenced in visualization configurations, such as the Customizations area of the Series tab for column charts
- Referenced by a dashboard tile in order to listen to a dashboard filter (this is configured in the Tiles to Update tab of the dashboard filters screen, described on this page)
You cannot use the Content Validator to remove fields from custom filters, custom fields, or table calculations, because typically you need to make additional changes to keep the expression working. Instead, use the content validation error messages to find places where custom filters, custom fields, and table calculation expressions need to be fixed, then rework the expressions as appropriate.
- Click Validate to run the Content Validator.
- Click a Group by setting to choose how errors are grouped. In this example, we’re grouping by Error so we can simultaneously adjust multiple items (Looks, tiles, or both).
- In the error table, click the Remove button next to the field error you want to correct.
- Verify the name of the item you want to remove. The Content Validator fills this in automatically.
- If you have grouped by Error and there are multiple items (Looks, tiles, or both), you can click Show Content to see a list of items that the Content Validator will update.
- Optionally, you can uncheck any Looks or tiles to leave their names unchanged.
- Click Remove Field to make the change.
Deleting Looks
If you group the error table by Content, you have the option to delete Looks in the table.
To use the Content Validator to delete Looks:
- Click Validate to run the Content Validator.
- Group the table by Content.
- In the error table, click the Select box next to the Looks you want to delete.
- Click Delete all selected Looks.
- Click OK in the confirmation box to delete the selected Looks.
If you accidentally delete a Look, your Looker admin might be able to recover it for you. Looker stores deleted content in the trash until your Looker admin team empties the trash as part of their maintenance process.
Using the Content Validator to Find and Replace
The Content Validator can also be used to search and replace the names for models, Explores, and fields. For example, you might decide that you prefer one field over another and want to make that replacement in your project, even though there is no error. The complete list of elements that the Content Validator can search/replace is provided in this section above.
Read the section Before Using the Content Validator before proceeding. The Content Validator can impact many parts of your Looker content, so it’s important to plan and consider beforehand.
To use the Content Validator as a find and replace tool:
- Click the Find & Replace in All Content button.
- Select the Type of LookML element you want to change (field, view, Explore, or model).
- Enter the Name of the item you want to replace.
- Enter a Replacement Name for the field, view, Explore, or model.
- Click the Replace button to make the change.
Things to Consider
Plan to rerun content validation to view the results of any fixes you make.
Note the following about the Content Validator:
- For views, Explores, or models, you can change their names, but you cannot remove their name entirely. See this section for a list of items that you can remove with Content Validator.
- Table calculations can only reference fields that are included in the query of a Look or tile. Therefore, if you remove a field from a Look or tile’s query but a table calculation still uses that field, you’ll get a new content validation error.
- Looks that have been deleted and are in the Trash page will not be validated. If you want to validate a deleted Look, your Looker admin must restore the Look first.
- Validation results are based on the LookML available in the mode you’re in when you run the validator. If you are in Production Mode, the validation results will reflect all LookML that has been pushed to production. If you are in Development Mode, the validation results will reflect your saved LookML, even if it hasn’t been pushed to production.
To write powerful LookML, you need to be able to reference existing dimensions, measures, views, or derived tables even if they are not in the current scope. You also need to reference columns in the underlying table and use your database dialect’s function calls to manipulate those values.
Substitution Operator ($)
The substitution operator, `$`, makes LookML code more reusable and modular, enabling you to reference other views and derived tables, columns in a SQL table, or LookML dimensions and measures. This is good for two reasons. First, you might have already worked out a really tricky dimension or measure, and you won't need to write out all that complexity again. Second, if you change something about a dimension or measure, that change can propagate to everything else that relies on it.
There are several ways that you can use the substitution operator:
`${TABLE}.column_name` references a column in the table that is connected to the view you're working on. For example:
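A sketch (the column name is illustrative):

```lookml
dimension: customer_id {
  type: number
  sql: ${TABLE}.customer_id ;;   # resolves to <table_name>.customer_id in the generated SQL
}
```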
`${field_name}` references a dimension or measure within the view you're working on. For example:
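A sketch (the `sale_price` dimension is assumed to be defined in the same view):

```lookml
measure: total_revenue {
  type: sum
  sql: ${sale_price} ;;   # reuses the sale_price dimension's SQL expression
}
```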
`${view_name.field_name}` references a dimension or measure from another view. For example:
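A sketch (the `customers` view and its `state` dimension are assumptions):

```lookml
dimension: customer_state {
  sql: ${customers.state} ;;   # the state dimension defined in the customers view
}
```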
`${view_name.SQL_TABLE_NAME}` references another view or derived table. Note that `SQL_TABLE_NAME` in this reference is a literal string; you do not need to replace it with anything. For example:
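A sketch (the `orders` view is an assumption):

```lookml
view: customer_order_summary {
  derived_table: {
    sql:
      SELECT customer_id, COUNT(*) AS order_count
      FROM ${orders.SQL_TABLE_NAME}   -- expands to the table behind the orders view
      GROUP BY customer_id ;;
  }
}
```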
`${view_name.SQL_TABLE_NAME}` does not work with the `sql_trigger` parameter used with datagroups.
Scoping and Naming
You can name Explores, views, fields, and sets. These Looker identifiers are written without quotation marks.
LookML fields and sets have full names and short names:
- Full names are of the form `<view>.<field-name | set-name>`. The left side indicates the scope, which is the view that contains the field or set. The right side specifies the particular field or set name.
- Short names simply take the form `<field-name | set-name>`, with no separating period. Looker expands short names into full names by using the scope in which they are used.
Below is an example showing many forms of names and scope. This is an unrealistic group of fields, but is shown to demonstrate a variety of possible scoping expressions.
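The original example isn't reproduced here; the sketch below (view, field, and column names are illustrative) mixes short and fully scoped names:

```lookml
view: orders {
  dimension: id {
    primary_key: yes
    sql: ${TABLE}.id ;;
  }

  # The fully scoped reference ${customer.address} points at a field
  # in the customer view, not the enclosing orders view
  dimension: customer_address {
    sql: ${customer.address} ;;
  }

  set: order_info {
    # Short names (id, customer_address) expand using the orders scope;
    # customer.name is already a full name
    fields: [id, customer_address, customer.name]
  }
}
```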
In the `dimension: customer_address` declaration above, note that the underlying view for the SQL block (`customer`) is different from the enclosing view scope (`orders`). This can be useful when you need to compare fields between two different views.
When a view (we’ll call it “view A”) refers to a field defined in a different view (we’ll call it “view B”), there are a few things to keep in mind:
- The view B file must be included in the same model as view A, using the `include` parameter.
- View B must be joined to view A in one or more Explores. See our Working with Joins in LookML page to learn about joins.
SQL Dialect
Looker supports many database types, such as MySQL, Postgres, Redshift, BigQuery, and so on. Each database supports a slightly different feature set with differing function names, referred to as the SQL dialect.
LookML is designed to work with all SQL dialects, and it does not favor one dialect over another. However, you will need to include SQL code expressions (known as SQL blocks) in certain LookML parameters. With these parameters, Looker passes the SQL expression directly to your database, so you must use the SQL dialect that matches your database. For example, if you use a SQL function, it must be a function that your database supports.
SQL Blocks
Some LookML parameters require you to provide raw SQL expressions, so that Looker can understand how to retrieve data from your database.
LookML parameters starting with `sql_` expect a SQL expression of some form. Examples include `sql_always_where`, `sql_on`, and `sql_table_name`. The most common LookML parameter for SQL blocks is `sql`, used in dimension and measure field definitions to specify the SQL expression that defines the dimension or measure.
The code you specify in a SQL block can be as simple as a single field name or as complex as a correlated subselect. The content can be quite complex, accommodating almost any need you might have to express custom query logic in raw SQL. Note that the code you use in SQL blocks must match the SQL dialect used by the database.
Example SQL Blocks for Dimensions and Measures
Below are examples of SQL blocks for dimensions and measures. The LookML substitution operator ($) can make these sql declarations appear deceptively unlike SQL. However, after substitution has occurred, the resulting string is pure SQL, which Looker injects into the SELECT clause of the query.
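The examples below are a representative sketch (the view and column names are illustrative); the last two dimensions show SQL blocks that call MySQL functions:

```lookml
view: customers {
  dimension: first_name {
    sql: ${TABLE}.first_name ;;
  }
  dimension: last_name {
    sql: ${TABLE}.last_name ;;
  }
  measure: count {
    type: count
  }
  # These two dimensions use MySQL functions in their SQL blocks
  dimension: full_name {
    sql: CONCAT(${first_name}, ' ', ${last_name}) ;;
  }
  dimension: days_since_signup {
    type: number
    sql: DATEDIFF(CURDATE(), ${TABLE}.created_at) ;;
  }
}
```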
As shown in the last two dimensions above, SQL blocks can use functions supported by the underlying database (such as the MySQL functions CONCAT and DATEDIFF in this example).
Example SQL Block with a Correlated Subselect
You can place any SQL statement in a field’s SQL block, including a correlated subselect. An example is below:
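A sketch of a correlated subselect in a dimension's SQL block (the table and column names are illustrative):

```lookml
view: customers {
  dimension: id {
    primary_key: yes
    sql: ${TABLE}.id ;;
  }
  # Correlated subselect: finds each customer's first order
  dimension: first_order_id {
    type: number
    sql: (SELECT MIN(o.id)
          FROM orders AS o
          WHERE o.customer_id = customers.id) ;;
  }
}
```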
Example SQL Block for Derived Tables
Derived tables use the SQL block to specify the query that derives the table. An example is below:
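A minimal sketch of a SQL-based derived table (names are illustrative):

```lookml
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        COUNT(*) AS lifetime_orders
      FROM orders
      GROUP BY customer_id ;;
  }
  dimension: customer_id {
    primary_key: yes
    sql: ${TABLE}.customer_id ;;
  }
  dimension: lifetime_orders {
    type: number
    sql: ${TABLE}.lifetime_orders ;;
  }
}
```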
LookML Field Type References
When you reference an existing LookML field within another field, you can instruct Looker to treat the referenced field as a specific data type by using a double colon (::) followed by the desired type. For example, if you reference the orders.created_date dimension within another field, you can use the syntax ${orders.created_date::date} to ensure that the created_date field will be treated as a date field in the SQL that Looker generates, rather than being cast as a string.
The data type you can use in a reference depends on the data type of the original field you are referencing. For example, if you are referencing a string field, the only data type you can specify is ::string. Here is the full list of allowed field type references you can use for each type of field:
- In a reference to a string field, you can use ::string.
- In a reference to a number field, you can use ::string and ::number.
- In a reference to a date or time field, you can use ::string, ::date, and ::datetime. References using ::string and ::date return data in the query time zone, while references using ::datetime return data in the database time zone.
- In a reference to a yesno field, you can use ::string, ::number, and ::boolean. Field references using the ::boolean type are not available for database dialects that do not support the Boolean data type.
- In a reference to a location field, you can use ::latitude and ::longitude.
Using LookML Field Type References with Date Fields
As an example, suppose you have an enrollment_month dimension and a graduation_month dimension, both of which were created within dimension groups of type: time. In this example, the enrollment_month dimension is produced by the following dimension group of type: time:
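A sketch of the dimension group (the timeframes list and column name are illustrative):

```lookml
dimension_group: enrollment {
  type: time
  timeframes: [date, week, month, year, raw]
  sql: ${TABLE}.enrollment_date ;;
}
```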
Similarly, the graduation_month dimension is created by the following dimension group of type: time:
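A matching sketch for the graduation dimension group (column name is illustrative):

```lookml
dimension_group: graduation {
  type: time
  timeframes: [date, week, month, year, raw]
  sql: ${TABLE}.graduation_date ;;
}
```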
Using the enrollment_month and graduation_month dimensions, you can calculate how many months or years passed between a student's enrollment and graduation by creating a dimension group of type: duration. However, because some date fields are cast as strings in the SQL that Looker generates, setting the enrollment_month and graduation_month dimensions as the values for sql_start and sql_end can result in an error.
To avoid an error resulting from these time fields being cast as strings, one option is to create a dimension group of type: duration, referencing the raw timeframes from the enrollment and graduation dimension groups in the sql_start and sql_end parameters:
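A sketch of the duration dimension group using the raw timeframes:

```lookml
dimension_group: enrolled {
  type: duration
  intervals: [month, year]
  sql_start: ${enrollment_raw} ;;
  sql_end: ${graduation_raw} ;;
}
```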
In the Explore UI, this generates a dimension group called Duration Enrolled, with individual dimensions Months Enrolled and Years Enrolled.
A simpler alternative to using the raw timeframe in a dimension group of type: duration is to specify the ::date or ::datetime reference type for the fields referenced in the sql_start and sql_end parameters.
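A sketch of the same duration dimension group using ::date references instead of raw timeframes:

```lookml
dimension_group: enrolled {
  type: duration
  intervals: [month, year]
  sql_start: ${enrollment_month::date} ;;
  sql_end: ${graduation_month::date} ;;
}
```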
The LookML in this example also creates a Duration Enrolled dimension group, but using the ::date reference allows the enrollment_month and graduation_month dimensions to be used without using a raw timeframe or casting them as strings with SQL.
For an additional example of how LookML field type references can be used to create custom dimension groups of type: duration, see the dimension_group parameter documentation page.
This syntax is not available with measures of type: list, which cannot be referenced as of Looker 6.8.
LookML Constants
The constant parameter allows you to specify a constant you can use throughout a LookML project. With LookML constants, you can define a value once and reference it in any part of your project where strings are accepted, thus reducing repetition in your LookML code.
Constants must be declared within a project manifest file, and the value for a constant must be a string. For example, you can define a constant city with the value "Okayama" as follows:
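In the project manifest file:

```lookml
constant: city {
  value: "Okayama"
}
```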
The city constant can then be referenced throughout your project using the syntax @{city}. For example, you can use the city constant with the label parameter in the users Explore:
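A sketch of the Explore using the constant in its label:

```lookml
explore: users {
  label: "@{city} Users"
}
```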
Looker then displays Okayama Users in both the Explore menu and in the title of the Explore, rather than the default Users.
For more information and examples of how you can use LookML constants to write reusable code, see the constant parameter documentation page.
This page covers more common patterns in LookML.
Labeling Fields (and Names in the UI)
Looker converts LookML field names into the strings displayed in the UI by combining the view name in regular-weight font with the field’s short name in bold. For example, a field called Amount in the Orders view would appear in the UI as Orders Amount. On this page, they are both bolded and the view name is capitalized (ORDERS Amount) to make the discussion clearer.
If you would like a field to be named differently than its column name in a table, simply change the field name and declare its sql: linkage. In the example below, there is a table airports with a column cntrl_twr. Looker would generate the following declaration:
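Assuming the generator detects cntrl_twr as a yes/no column, the generated declaration might look like:

```lookml
view: airports {
  dimension: cntrl_twr {
    type: yesno
    sql: ${TABLE}.cntrl_twr ;;
  }
}
```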
We will rename the cntrl_twr dimension to be human-readable.
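A sketch of the renamed dimension; the sql: parameter still points at the original column:

```lookml
view: airports {
  dimension: has_control_tower {
    type: yesno
    sql: ${TABLE}.cntrl_twr ;;
  }
}
```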
Filtering Counts by a Dimension
It is pretty easy to group by a dimension and count entities — grouping by USERS Country, ORDERS Count will tell you where your orders are coming from by country. However, it is often useful to build a count filtered by some dimensional value. For example, you could make a new measure ORDERS France Count:
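A sketch of such a filtered measure (field names are illustrative):

```lookml
measure: france_count {
  type: count
  filters: {
    field: users.country
    value: "France"
  }
}
```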
Filters can use any expression. If you wanted a field that counted users from the EU, you could use something like this:
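For example, a filtered count over several countries (the country list here is illustrative):

```lookml
measure: eu_count {
  type: count
  filters: {
    field: users.country
    value: "France, Germany, Italy, Spain"
  }
}
```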
If you want to filter with a mathematical expression, be sure to enclose it in double quotes:
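A sketch, assuming an orders.total_amount field; note the double quotes around the expression:

```lookml
measure: large_order_count {
  type: count
  filters: {
    field: orders.total_amount
    value: ">=100"
  }
}
```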
Percentages
Many key performance indicators are expressed in the form of percentages, such as “the percent of items returned,” “the percent of emails that resulted in a sale,” or other instances of “the percent of X that Y.” In LookML, the design pattern is to create counts for the two conditions and create a third field that computes the percentage between the two.
Use the form below when computing percentages. In Postgres, counts are integers, and division between integers results in integers. Multiplying by 100.0 converts the first count to a floating-point number, thus converting the rest of the expression to a float. To avoid divide-by-zero errors, NULLIF(value, 0) converts a zero value to null, making the result null and avoiding an error.
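A sketch of the pattern, assuming returned_count and count measures already exist in the view:

```lookml
measure: percent_items_returned {
  type: number
  sql: 100.0 * ${returned_count} / NULLIF(${count}, 0) ;;
}
```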
Using Sets for Drill-Down Details
One of the most powerful features of Looker is the ability to drill into data to see the underlying entities that make up a count or other measure.
When a user clicks a measure in the Looker UI, Looker creates a new query that isolates the set of data making up that measure. Each value for each dimension on that row of the table is added to the current filters.
In order to show the detail, Looker needs a specified list of drill fields to show when the measure’s value has been clicked. When you generate a model, the generator typically creates some initial drill fields for you. In addition, you can add drill fields yourself. For example, assume that we are measuring ORDERS Count by USERS State in the last week. In Looker, the query would look something like this:
| USERS State | ORDERS Count |
| --- | --- |
| California | 24 |
| Texas | 5 |
| Colorado | 4 |
| Florida | 4 |
| Illinois | 4 |
If we click on 24 in the California row, the expected behavior is that we’d see the 24 orders that came from California.
Looker handles adding the filter USERS State: California, but Looker doesn’t know which fields you want to show for each order. You’ll need to use a set to declare those fields in your model.
In LookML, a set is a list of field (dimension, measure, and filter) names. Sets are used to tell Looker which fields:
- You want to show when drilling into a count or other measure
- To import when joining a view
- Are indexed in an Explore
The same set can be used in many places in a model, so Looker provides several ways of creating sets.
Literal Sets
The simplest form of a set is a literal set, created by declaring the set inline as an array using square brackets ([]).
Given this example:
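A sketch of a view we might be drilling into (names are illustrative):

```lookml
view: customers {
  dimension: id {
    primary_key: yes
    sql: ${TABLE}.id ;;
  }
  dimension: name {
    sql: ${TABLE}.name ;;
  }
  dimension: city {
    sql: ${TABLE}.city ;;
  }
  measure: count {
    type: count
  }
}
```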
The fields we’d like to show are id, name, and city.
In the measure, we could simply declare a literal array.
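For example:

```lookml
measure: count {
  type: count
  drill_fields: [id, name, city]
}
```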
For sets that are only used once, declaring them literally is simple and easily understood.
Named Sets
Suppose we have two counts: CUSTOMERS Count and CUSTOMERS In California Count. When we drill into either of these counts, we’d like to show the fields id, name, and city. If we declare the fields literally, we could:
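A sketch of the two measures, each repeating the literal set:

```lookml
measure: count {
  type: count
  drill_fields: [id, name, city]
}
measure: in_california_count {
  type: count
  filters: {
    field: state
    value: "California"
  }
  drill_fields: [id, name, city]
}
```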
If we wanted to add a new field (say the field customers.state), we’d have to edit both lists. Instead, LookML provides a way to create named sets that we can maintain in one place and use in multiple places. The code below creates a set customers.detail and points both counts to the same set of fields.
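A sketch of the named set shared by both measures:

```lookml
view: customers {
  set: detail {
    fields: [id, name, city]
  }
  measure: count {
    type: count
    drill_fields: [detail*]
  }
  measure: in_california_count {
    type: count
    filters: {
      field: state
      value: "California"
    }
    drill_fields: [detail*]
  }
}
```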
LookML sets are pretty powerful:
- Redeclaration of sets is additive: if you declare a set in multiple places, then Looker includes all of the fields declared for the set in all locations.
- You can embed sets in other sets by typing the other set name, followed by an asterisk, like setname*.
- You can even remove elements from sets by placing a hyphen before the field name, like -fieldname.
Customizing Drill Visualizations
If your Looker admin has enabled the Visual Drilling Labs feature, drill visualizations will not always default to a data table. In this case, you can customize the visualization displayed by using Liquid variables in the link parameter, as shown on this documentation page and this Help Center article.
Filtering Result Sets
LookML provides a set of filter operations that can be applied to fields and Explores to filter result sets before they are returned to the user.
always_filter on the Explore
Use always_filter to always apply a set of filters to any query run within an Explore. The filters will appear in the Looker UI, and though users can change the default filter value that you provide, they cannot remove the filters. Generally, these filters are used to remove data that you normally don’t want to include. For example, suppose in the Orders Explore we only wanted to see orders that were complete or pending. We could add:
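A sketch of the Explore (field names are illustrative):

```lookml
explore: orders {
  always_filter: {
    filters: {
      field: status
      value: "complete, pending"
    }
  }
}
```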
If the user wanted to see orders with other status values, they could set ORDERS Status to % in the UI.
sql_always_where on the Explore
If you want to apply a query restriction that users cannot change, you can use sql_always_where. In addition to queries run by human users, the restriction will apply to dashboards, scheduled Looks, and embedded information that relies on that Explore. A sql_always_where condition is not displayed to the user, unless they look at the underlying SQL of any queries that they create.
The following example prevents users from looking at orders before 2012-01-01:
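A sketch of the restriction, assuming a created_date dimension in the orders view:

```lookml
explore: orders {
  sql_always_where: ${created_date} >= '2012-01-01' ;;
}
```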
conditionally_filter on the Explore
Very large tables require some thought when querying, since unlimited queries can quickly become too burdensome on the database. LookML provides a way to address this in the form of conditionally_filter.
You use the conditionally_filter parameter to apply a filter to the query unless the user has already added a filter for one of the fields listed in the unless section.
The following example won’t make any change to the user’s query if the user applied a filter on one or more of these fields: created_date, shipped_time, shipped_date, orders.id, or customer.name. If the user didn’t filter on any of those fields, Looker will automatically add a filter of 1 day on orders.created_time.
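A sketch of that configuration:

```lookml
explore: orders {
  conditionally_filter: {
    filters: {
      field: created_time
      value: "1 day"
    }
    unless: [created_date, shipped_time, shipped_date, id, customer.name]
  }
}
```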
Joins enable you to connect different views so that you can explore data from more than one view at the same time and see how different parts of your data relate to each other.
For example, your database might include tables like order_items, orders, and users. We can use joins to explore data from all these tables at the same time. This page will walk through joining in LookML, including specific join parameters and joining patterns.
Joins Start with an Explore
Joins are defined in the model file to establish the relationship between an Explore and a view. Joins connect one or more views to a single Explore, either directly, or through another joined view.
Let’s consider two database tables: order_items and orders. Once you have generated views for both of these tables, you can declare one or more of them under the explore parameter in the model file, like this:
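For example:

```lookml
explore: order_items {}
```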
When we run a query from the order_items Explore, order_items will appear in the FROM clause of the generated SQL, like this:
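A representative (abbreviated) example of the generated SQL:

```sql
SELECT
  COUNT(*) AS "order_items.count"
FROM order_items
```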
We can also join additional information to our order_items Explore. For example, suppose we wanted to add data about the order that the order_item is a part of. You might do something like this:
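A sketch of the join (the id columns are illustrative):

```lookml
explore: order_items {
  join: orders {
    type: left_outer
    relationship: many_to_one
    sql_on: ${order_items.order_id} = ${orders.id} ;;
  }
}
```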
The LookML above accomplishes two things. First, you will be able to see fields from both orders and order_items in the UI.
Second, the LookML describes how to join orders and order_items together. That LookML would translate to the following SQL:
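A representative (abbreviated) example of the generated SQL:

```sql
SELECT
  orders.created_at AS "orders.created_at",
  COUNT(*) AS "order_items.count"
FROM order_items
LEFT JOIN orders
  ON order_items.order_id = orders.id
GROUP BY 1
```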
Below, we’ll step through these LookML parameters one by one to explain them in greater detail. You can also read the join parameter reference page to learn more about how this LookML is translated into SQL.
Chat Team Tip: The validation error we are asked about most is “Unknown or inaccessible field,” which can be caused by a missing join. Check out the Help Center article about this error for more information.
Join Parameters
There are four main parameters that are used to join: join, type, relationship, and sql_on. Let’s walk through each of them from the example above.
Step 1: Starting the Explore
First, create the order_items Explore, like so:
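```lookml
explore: order_items {}
```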
Step 2: join
If you want to join a table, you must first declare it in a view. In this example, orders is an existing view in our model. We can then use the join parameter to declare that we want to join the orders view to order_items:
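A sketch; the join is not yet complete here, since the later steps add type, relationship, and sql_on:

```lookml
explore: order_items {
  join: orders {}
}
```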
Step 3: type
We can next consider which type of join to perform. Looker supports LEFT JOIN, INNER JOIN, FULL OUTER JOIN, and CROSS JOIN. These correspond to the type parameter values of left_outer, inner, full_outer, and cross.
The default value of type is left_outer, which will generally be your most popular join type.
Step 4: relationship
We can then define a join relationship between order_items and orders. Properly declaring the relationship of a join is important for Looker to calculate accurate measures. The relationship is defined from the order_items Explore to the orders view. The possible options are one_to_one, many_to_one, one_to_many, and many_to_many.
In our example, there can be many order_items for a single order, so the relationship from order_items to orders is many_to_one:
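A sketch; sql_on is added in the next step:

```lookml
explore: order_items {
  join: orders {
    type: left_outer
    relationship: many_to_one
  }
}
```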
If you do not include a relationship in your join, Looker will default to many_to_one.
Step 5: sql_on
The final step is declaring how to join these two tables together. This can be done with either the sql_on or the foreign_key parameter. We usually suggest sql_on since it can do everything foreign_key can do, but is typically easier to understand.
sql_on is equivalent to the ON clause in the generated SQL for a query. With this parameter, we can declare which fields should be matched up to perform the join:
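For example (the id columns are illustrative):

```lookml
explore: order_items {
  join: orders {
    type: left_outer
    relationship: many_to_one
    sql_on: ${order_items.order_id} = ${orders.id} ;;
  }
}
```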
We can also write more complex joins. For example, you may want to join only orders with id greater than 1000:
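```lookml
sql_on: ${order_items.order_id} = ${orders.id} AND ${orders.id} > 1000 ;;
```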
Check out substitution operators to learn more about the ${ ... } syntax in these examples.
Step 6: Testing
We’ve now created a new join in LookML! You can test that this join is functioning as expected by going to the Order Items Explore. You should see fields from both order_items and orders.
Check out Model Development to learn more about testing LookML changes.
Joining Through Another View
You may want to join a view to an Explore through another view. In the example above, we joined orders to order_items via the order_id field. We might also want to join the data from a view called users to the order_items Explore, even though they don’t share a common field. This can be done by joining through the orders view.
We can use sql_on or foreign_key to join users to orders instead of order_items. We do this by correctly scoping the field from orders as orders.user_id.
Here is an example using sql_on:
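A sketch (the id columns are illustrative); users is joined through orders by scoping the join condition to orders.user_id:

```lookml
explore: order_items {
  join: orders {
    type: left_outer
    relationship: many_to_one
    sql_on: ${order_items.order_id} = ${orders.id} ;;
  }
  join: users {
    type: left_outer
    relationship: many_to_one
    sql_on: ${orders.user_id} = ${users.id} ;;
  }
}
```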
Joining a View More Than Once
Let’s say we have a users view that contains data for both buyers and sellers. We may want to join data from this view into order_items, but do so separately for buyers and sellers. We can join users twice, with different names, using the from parameter.
The from parameter allows you to specify which view to use in a join, while giving the join a unique name. For example:
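A sketch (the buyer_id and seller_id columns are illustrative):

```lookml
explore: order_items {
  join: buyers {
    from: users
    relationship: many_to_one
    sql_on: ${order_items.buyer_id} = ${buyers.id} ;;
  }
  join: sellers {
    from: users
    relationship: many_to_one
    sql_on: ${order_items.seller_id} = ${sellers.id} ;;
  }
}
```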
In this case, only buyer data will be joined in as buyers, while only seller data will be joined in as sellers.
Note: the users view must now be referred to by its aliased names buyers and sellers in the join.
Limiting Fields from a Join
The fields parameter can be used to specify which fields are brought from a join into an Explore. By default, all fields from a view will be brought in when joined. However, sometimes you might want to bring through only a subset of fields.
For example, when orders is joined to order_items, we may want to bring only the shipping and tax fields through the join, like so:
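```lookml
explore: order_items {
  join: orders {
    relationship: many_to_one
    sql_on: ${order_items.order_id} = ${orders.id} ;;
    fields: [shipping, tax]
  }
}
```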
You can also reference a set of fields, such as [set_a*]. Each set is defined within a view using the set parameter. Suppose we have the following set defined in the orders view:
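A sketch of the set (the field names are illustrative):

```lookml
view: orders {
  set: orders_set {
    fields: [shipping, tax, total]
  }
}
```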
We can then choose to bring only these three fields through when we join orders to order_items, like so:
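```lookml
explore: order_items {
  join: orders {
    relationship: many_to_one
    sql_on: ${order_items.order_id} = ${orders.id} ;;
    fields: [orders_set*]
  }
}
```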
Symmetric Aggregates
Looker uses a feature called “symmetric aggregates” to calculate aggregations (like sums and averages) correctly, even when joins result in a fanout. Symmetric aggregates are described in more detail in this Community topic, and the fanout problem they solve is explained in this blog post on Aggregate Functions Gone Bad.
Primary Keys Required
In order to have measures (aggregations) come through joins, you must define primary keys in all of the views involved in the join. You can do this by adding the primary_key parameter to the primary key field definition in each view:
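For example:

```lookml
view: orders {
  dimension: id {
    primary_key: yes
    sql: ${TABLE}.id ;;
  }
}
```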
To correctly handle joined measures, Looker relies on you specifying a primary key where the values are completely unique, non-NULL values. If your data does not contain a primary key, consider whether the concatenation of several fields would result in a primary key of completely unique, non-NULL values. If your primary key is not unique or contains NULL values and your query includes data that reveal those issues, then Looker returns an error as described in this Help Center article.
Supported SQL Dialects
Looker’s ability to provide symmetric aggregates depends on whether the database dialect supports them. The following list shows which dialects support symmetric aggregates in the most recent Looker release:
If your dialect does not support symmetric aggregates, you will need to be careful when executing joins in Looker, as some types of joins can result in inaccurate aggregations (like sums and averages). This problem, and the workarounds for it, are described in great detail in this blog post on Aggregate Functions Gone Bad.
Apache Druid Does Not Support Joins
The Apache Druid database dialect does not support SQL joins. Using the join parameter with Apache Druid will result in an error.
Learn More About Joins
This has been a quick tutorial about joining in Looker. To learn more about join parameters in LookML, check out the Join Reference documentation.
Derived tables are important tools in Looker that enable you to expand the sophistication of your analyses. In some cases, they can also play a valuable role in enhancing query performance.
At a high level, Looker’s derived table functionality provides a way to create new tables that don’t already exist in your database.
You define a derived table in one of these ways:
Using LookML to define a native derived table (NDT)
These derived tables are defined in LookML, referring to dimensions and measures in your model. Compared to SQL-based derived tables, they are much easier to read, understand, and reason about as you model your data.
Turning a SQL query into a derived table definition
These derived tables are defined in SQL, referring to tables and columns in your database. You can use SQL Runner to create the SQL query and turn it into a derived table definition. You cannot refer to LookML dimensions and measures in a SQL-based derived table.
You can then base a LookML view on the derived table, just like any other table.
Simple Example
Let’s consider an example to clarify the concept. Suppose your database already contains a table called order, and you’d like to summarize some of that order data by customer. We can create a new derived table named customer_order_facts to do this. Here’s the LookML to create the customer_order_facts derived table as an NDT and as a SQL-based derived table:
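A sketch of both versions. The NDT version assumes the orders Explore defines customer_id, first_order, and total_amount fields; the SQL version's table and column names are likewise illustrative:

```lookml
# Native derived table (NDT) version
view: customer_order_facts {
  derived_table: {
    explore_source: orders {
      column: customer_id { field: orders.customer_id }
      column: first_order_date { field: orders.first_order }
      column: lifetime_amount { field: orders.total_amount }
    }
  }
  dimension: customer_id {
    primary_key: yes
  }
  dimension: first_order_date {
    type: date
  }
  dimension: lifetime_amount {
    type: number
  }
}

# SQL-based version
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        MIN(DATE(time)) AS first_order_date,
        SUM(amount) AS lifetime_amount
      FROM orders
      GROUP BY customer_id ;;
  }
  dimension: customer_id {
    primary_key: yes
    sql: ${TABLE}.customer_id ;;
  }
  dimension: first_order_date {
    type: date
    sql: ${TABLE}.first_order_date ;;
  }
  dimension: lifetime_amount {
    type: number
    sql: ${TABLE}.lifetime_amount ;;
  }
}
```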
There are some things to note:
- We’ve used the derived_table parameter to base the view on a derived table.
- Other than a few derived table parameters (derived_table and either explore_source or sql) at the top of the view, this view works just like any other.
- The column names of your result set will be the names you need to reference in your dimensions, such as ${TABLE}.first_order_date. For this reason, if you are defining a SQL-based derived table, you should make sure to alias each column with a clean name by using AS. This is why we wrote MIN(DATE(time)) AS first_order_date instead of simply MIN(DATE(time)).
- Some dialects, such as Google Cloud SQL, IBM Netezza, and MemSQL, do not support a SQL CREATE TABLE AS SELECT statement in a single step. In these dialects you cannot create PDTs as shown above. However, you can use the create_process parameter to create PDTs in multiple steps.
- If you need to create a PDT for a dialect that uses a custom DDL, such as Google’s predictive BigQuery ML, you can use the sql_create parameter.
Temporary and Persistent Derived Tables
Derived tables can be temporary (ephemeral), or they can be stored in your database (persistent).
Regular or “Ephemeral” Derived Table
A regular derived table — sometimes called an ephemeral derived table — is temporary and not written to your database. When a user runs an Explore query involving one or more derived tables, Looker constructs a SQL query using a dialect-specific combination of the SQL for the derived table(s) plus the requested fields, joins, and filter values. If the combination has been run before and the results are still valid in cache, Looker uses those cached results.
Because an ephemeral derived table runs a new query on your database every time a user requests data from it, you’ll want to be sure that the derived table is performant and won’t put excessive strain on your database. In cases where the query will take some time to run, a persistent derived table is often a better option.
Supported Database Dialects for Derived Tables
Looker’s ability to provide derived tables depends on whether the database dialect supports them. The following list shows which dialects support derived tables in the most recent Looker release:
Persistent Derived Table
A persistent derived table (PDT) is written into a scratch schema on your database and regenerated on a schedule of your choosing. In most cases, when the user requests data from the table it has already been created, reducing query time and database load.
To use this feature you need to create a scratch schema on your database, and your database needs to support writes. The connection that connects Looker to your database must also have the Persistent Derived Tables checkbox enabled. Most companies set this up when they initially configure Looker (see the Looker Dialects documentation page for instructions for your database dialect), and of course you can also do so after your initial setup.
There are some read-only database configurations that don’t allow persistence to work (most commonly Postgres hot-swap replica databases). In these cases, you can use ephemeral derived tables instead.
Supported Database Dialects for PDTs
Looker’s ability to provide PDTs depends on whether the database dialect supports them. The following list shows which dialects support PDTs in the most recent Looker release:
PDTs are not supported for Snowflake or Google BigQuery connections that use OAuth.
Adding Persistence
If your database allows writes and you have created a scratch schema (see the Looker Dialects documentation page for instructions for your database dialect), you can store derived tables in your database to increase query speed and reduce database load. Many users who can use persistence always use persistence, as typically the additional storage space required is not an issue.
There are some situations where you should avoid persistence. Persistence should not be added to derived tables that will be extended, because each extension of a PDT will create a new copy of the table in your database. Also, persistence cannot be added to derived tables that make use of templated filters or Liquid parameters. There are potentially an infinite number of possible user inputs with those features, so the number of persistent tables in your database could become unmanageable.
Making a Derived Table Persistent
There is no parameter that means “make this derived table persistent.” Rather, persistence is created by the addition of the datagroup_trigger, sql_trigger_value, or persist_for parameter.
Datagroups
If you have defined a datagroup for your caching policies, you can use the datagroup_trigger parameter to cause the persistent derived table (PDT) to rebuild on the same schedule as your caching policies. This means that users will not have to wait for the PDT to be created.
Datagroups are the most flexible method of creating persistence. You can define a datagroup to trigger a PDT rebuild at the time of an ETL rebuild, for example, or you can trigger the datagroup separately from your database ETL cycle.
See the Caching Queries and Rebuilding PDTs with Datagroups documentation page for an overview of caching and rebuilding PDTs in Looker.
sql_trigger_value
You can also specify a schedule on which the PDT will rebuild. This means that users will not have to wait for the PDT to be created. If the user happens to request data from the PDT while it is being rebuilt, data will be returned from the existing table until the new table is ready.
You achieve this kind of persistence by using the sql_trigger_value parameter.
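A sketch of a PDT that rebuilds when the trigger query's value changes (here, once per day; table names are illustrative):

```lookml
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        COUNT(*) AS lifetime_orders
      FROM orders
      GROUP BY customer_id ;;
    # Rebuild whenever the value returned by this query changes
    sql_trigger_value: SELECT CURRENT_DATE ;;
  }
}
```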
By default, Looker checks to see if it should regenerate a table every 5 minutes. However, you can change this schedule as desired by using the PDT And Datagroup Maintenance Schedule setting in Looker’s admin settings.
persist_for
Another option is to set the length of time the derived table should be stored before it is dropped. If a user queries the table before that length of time has been reached, they’ll receive data from the existing table. Otherwise, it will be rebuilt the next time a user requests data from it, meaning that particular user will need to wait for the rebuild. You achieve this kind of persistence by using the persist_for parameter.
Indexing PDTs
Because PDTs are actually stored in your database, you should specify indexes using the indexes parameter for SQL dialects that support them. Or, if you’re using Redshift, you can specify regular sort keys (using sortkeys), interleaved sort keys (using indexes), and distribution keys (using distribution).
To add persistence to our original derived table example, we could set it to rebuild when the order_datagroup datagroup triggers, and add indexes on both customer_id and first_order_date, like this:
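A sketch; the order_datagroup is assumed to be defined in the model file and triggered by new rows in orders:

```lookml
datagroup: order_datagroup {
  sql_trigger: SELECT MAX(id) FROM orders ;;
}

view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        MIN(DATE(time)) AS first_order_date,
        SUM(amount) AS lifetime_amount
      FROM orders
      GROUP BY customer_id ;;
    datagroup_trigger: order_datagroup
    indexes: ["customer_id", "first_order_date"]
  }
  # dimensions unchanged from the earlier example
}
```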
If you don’t add an index (or Redshift equivalent), Looker will warn you that you should do so to improve query performance.
Generating PDTs
If you are going to use persistence, be mindful of the underlying query’s duration and trigger time, because they will have an impact on how PDTs are generated. These are some things to keep in mind:
- If a user queries a PDT that has not yet been built, or queries a PDT that uses persist_for persistence and the PDT build has expired, the new PDT build must complete before the query can be completed.
- All PDTs triggered by the same datagroup will rebuild during the same regeneration process.
- Depending on the value set in the Max PDT Builder Connections field in a connection’s settings, one or more PDTs can be built at the same time on any given connection. See the next bullet for exceptions.
- PDTs persisted through the persist_for parameter, PDTs in Development Mode, and PDTs rebuilt via the Rebuild Derived Tables & Run option are built consecutively, not in parallel.
- If you have two PDTs (we’ll call them PDT_1 and PDT_2), and you make PDT_2 reliant on PDT_1, the PDT_1 table will need to finish generating before Looker can start generating PDT_2. Making one PDT reliant on another is discussed in the Referencing Derived Tables in Other Derived Tables section later on this page.
The takeaway is that long-running PDTs can delay the builds of other PDTs. They can also slow down other user queries against a PDT while the database is working hard to generate the large PDT.
Persistence in Development Mode
When you add persistence to a derived table, it will exhibit some specific behaviors in Development Mode that you should know about.
Derived Table Behavior in Development Mode
If you query a PDT in Development Mode without making any changes to its definition, Looker will query the production version of that PDT. However, as soon as you make any change to the PDT definition and query it, a new development version of the PDT is created. This enables you to test changes without disturbing end users.
If a development version of a PDT is created, it will always be persisted for a maximum of 24 hours, regardless of the persistence method you’ve used. This ensures that Development Mode tables are cleaned up on a frequent basis and don’t clutter your database.
After you push changes to production, Looker will immediately begin treating your development table as the production table (as long as you haven’t used the conditional SQL described below). This means that users will not have to wait for a new version of the production table to be built.
Table Sharing and Cleanup
Within any given Looker instance, Looker will share PDTs between users if the PDTs have the same definition and have the same persistence method setting. Additionally, if a PDT’s definition ever ceases to exist, Looker drops the table.
This has several benefits:
- If you haven’t made any changes to a PDT in Development Mode, your queries will use the existing production tables.
- If two developers happen to make the same change to a PDT while in Development Mode, they will share the same development table.
- Once you push your changes from Development Mode to Production Mode, the old production definition does not exist anymore, so the old production table is dropped.
- If you decide to throw away your Development Mode changes, that PDT definition does not exist anymore, so the unneeded development tables are dropped.
Working Faster in Development Mode
Finally, there are cases when the PDT you’re creating may take a long time to generate, which can be frustrating when testing lots of changes in Development Mode. For SQL-based derived tables, Looker supports a conditional WHERE clause for Development Mode that helps manage this:
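A sketch of what this can look like, using Looker’s `-- if prod --` / `-- if dev --` conditional SQL comments (the view, table, and column names are illustrative):

```lookml
view: customer_order_facts {
  derived_table: {
    sql:
      SELECT
        customer_id,
        SUM(sale_price) AS lifetime_spend
      FROM orders
      WHERE
        -- if prod -- orders.created_at >= '2000-01-01'
        -- if dev -- orders.created_at >= '2015-01-01'
      GROUP BY customer_id ;;
  }
}
```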
In this example, the query will include all data from 2000 onward when in production, but only the data from 2015 onward when in Development Mode. Using this feature strategically to limit your result set, and increase query speed, can make Development Mode changes much easier to validate.
If you use this feature, please keep in mind that tables will never be shared between Production Mode and Development Mode. This will use more storage on your database, and it means that the table in Production Mode will need to rebuild after you commit your changes to production.
Native derived tables do not support a conditional WHERE clause, but you can create filters in your NDTs, or limit the NDT rows, to help test and develop your NDTs.
Users Can Override Persistence
Some users can override the persistence settings you create if they need to get the most up-to-date data. These users can force every PDT referenced in the query and all PDTs that depend upon those PDTs to rebuild by clicking the query’s gear menu and choosing Rebuild Derived Tables and Run. A user with at least one role including the develop permission has the Rebuild Derived Tables and Run choice available for any query using a PDT that is already built:
This feature enables a user to query up-to-date data when they need it. While that specific user is waiting for the tables to rebuild, other users’ queries will still use the existing tables. After the PDTs are finished rebuilding, then all users will use the rebuilt PDTs. Although this process is designed not to interrupt other users’ queries while the tables are rebuilding, those users could still be affected by the additional load on your database. If you are in a situation where triggering a PDT rebuild during business hours could put an unacceptable strain on your database, you may need to communicate to your users that they should never rebuild certain PDTs.
Referencing Derived Tables in Other Derived Tables
It is possible to reference one derived table in the definition of another, creating something called “cascading” derived tables. To use them, you’ll need to use the syntax ${derived_table_or_view_name.SQL_TABLE_NAME}. For example, you can reference the clean_events derived table by typing ${clean_events.SQL_TABLE_NAME}. You can also use the ${derived_table_or_view_name.SQL_TABLE_NAME} syntax to refer to a LookML view.
When referring to another derived table or view name, type SQL_TABLE_NAME as a literal string.
Although datagroups are the recommended way to trigger PDT rebuilds, the ${derived_table_or_view_name.SQL_TABLE_NAME} syntax is not supported with a datagroup’s sql_trigger parameter. However, if you are using the sql_trigger_value parameter to trigger your PDT rebuilds, you can use the ${derived_table_or_view_name.SQL_TABLE_NAME} syntax with sql_trigger_value.
Although it’s not always required, when referring to a derived table in this manner, it’s often useful to alias the table. For example:
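A hedged sketch of an aliased reference (table and column names are illustrative):

```lookml
sql:
  SELECT
    o.user_id,
    ce.num_events
  FROM orders AS o
  LEFT JOIN ${clean_events.SQL_TABLE_NAME} AS ce
    ON o.user_id = ce.user_id ;;
```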
This is because, behind the scenes, persistent derived tables are named with lengthy codes in your database. In some cases (especially with ON clauses) it’s easy to forget that you need to use the ${derived_table_or_view_name.SQL_TABLE_NAME} syntax to retrieve this lengthy name. An alias can help to prevent this type of mistake.
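A cascading pair might be sketched as follows (a sketch only; the SQL and trigger queries are illustrative assumptions, not the only way to persist these tables):

```lookml
view: clean_events {
  derived_table: {
    sql:
      SELECT * FROM events
      WHERE type NOT IN ('test', 'staff') ;;
    sql_trigger_value: SELECT CURDATE() ;;
  }
}

view: event_summary {
  derived_table: {
    sql:
      SELECT user_id, COUNT(*) AS num_events
      FROM ${clean_events.SQL_TABLE_NAME}
      GROUP BY user_id ;;
    sql_trigger_value: SELECT COUNT(*) FROM ${clean_events.SQL_TABLE_NAME} ;;
  }
}
```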
In this example we’re creating a derived table from events that cleans out unwanted rows, and then creating a summary of that derived table. The event_summary table regenerates whenever new rows are added to clean_events.
This particular example could be done more efficiently in a single derived table, but it’s useful for demonstrating derived table references.
Monitoring and Troubleshooting Persistent Derived Tables
Companies that use many PDTs — especially if they are cascading — often want to understand the status of those PDTs. Looker offers several tools, described on the Admin Settings - PDTs documentation page, to investigate these tables.
When attempting to troubleshoot persistent derived tables:
- Pay special attention to the distinction between development tables and production tables when investigating the derived table log.
- Verify that no changes have been made to the scratch schema where Looker stores persistent derived tables. If changes have been made, you may need to update the Connection settings in the Admin section of Looker, and then possibly restart Looker to restore normal PDT functionality.
- Determine if there are problems with all PDTs, or just one. If there is a problem with one, then the issue is likely due to a LookML or SQL error.
- Determine if problems with the PDT correspond with the times when it is scheduled to rebuild.
- Make sure that all sql_trigger_value queries evaluate successfully, and return only one row and column. For SQL-based PDTs, you can do this by running them in SQL Runner. (Applying a LIMIT protects from runaway queries.) For more information on using SQL Runner to debug derived tables, see this Community topic.
- For SQL-based PDTs, use SQL Runner to verify that the SQL of the PDT executes without error. (Be sure to apply a LIMIT in SQL Runner to keep query times reasonable.)
- For SQL-based DTs and PDTs, avoid using common table expressions (CTEs). Using CTEs with DTs creates nested WITH statements that can cause PDTs to fail without warning. Instead, use the SQL for your CTE to create a secondary DT and reference that DT from your first DT using the ${derived_table_or_view_name.SQL_TABLE_NAME} syntax.
- Check that any tables on which the problem PDT depends — whether normal tables or PDTs themselves — exist and can be queried.
- Ensure that any tables on which the problem PDT depends do not have any shared or exclusive locks. For Looker to successfully build a PDT, it needs to acquire an exclusive lock on the table to be updated. This will conflict with other shared or exclusive locks currently on the table. Looker will be unable to update the PDT until all other locks have cleared. The same is true for any exclusive locks on the table Looker is building a PDT from; if there is an exclusive lock on a table, Looker won’t be able to acquire a shared lock to run queries until the exclusive lock clears.
- Use the Show Processes button in SQL Runner. If there are a large number of processes active, this could slow down query times.
- Monitor comments in the query. Database administrators can easily differentiate normal queries from those that generate PDTs. Looker adds a comment to the CREATE TABLE ... AS SELECT ... statement that includes the PDT’s LookML model and view, plus a unique identifier (slug) for the Looker instance. If the PDT is being generated on behalf of a user in Development Mode, the comment will indicate the user’s ID. This comment can be seen in a database console. The PDT generation comment can also be seen in an Explore’s SQL tab, at the top of the SQL statement, if Looker had to generate a PDT for the Explore’s query. Finally, the PDT generation comment appears in the details of the Queries admin page.
A derived table is a query whose result set is used as if it were a physical table in the database. Native derived tables (NDTs) perform the same function as SQL-based derived tables, but are defined in LookML rather than SQL. They are also much easier to read, understand, and reason about as you model your data.
Both native and SQL-based derived tables are defined in LookML using the derived_table parameter at the view level. However, with NDTs, you do not need to create a SQL query. Instead, you use the explore_source parameter to specify the Explore on which to base the derived table, the desired columns, and other desired characteristics.
You can also have Looker create the derived table LookML from a SQL Runner query, as described on the Using SQL Runner to Create Derived Tables documentation page.
Using an Explore to Begin Defining Your NDTs
Starting with an Explore, Looker can generate LookML for all or most of your derived table. Just create an Explore and select all of the fields you want to include in your derived table. Then, to generate the NDT LookML:
Click the Explore’s gear menu.
Select Get LookML.
Click the Derived Table tab.
Looker displays the LookML to create the corresponding NDT.
Copy the LookML.
Now that you have copied the generated LookML, paste it into a view file:
Navigate to your project files.
Click the + at the top of the project file list in the Looker IDE and select Create View. Or you can click a folder’s menu and select Create View from the menu to create the file inside the folder.
Set the view name to something meaningful.
Optionally, change column names, specify derived columns, and add filters.
When you use a measure of type: count in an Explore, the visualization labels the resulting values with the view name rather than the word Count. To avoid confusion, we recommend pluralizing your view name, selecting Show Full Field Name under Series in the visualization settings, or using a view_label with a pluralized version of your view name.
Defining an NDT in LookML
Whether you use derived tables declared in SQL or native LookML, the output of a derived_table’s query is a table with a set of columns. When the derived table is expressed in SQL, the output column names are implied by the SQL query. For example, the SQL query below will have the output columns user_id, lifetime_number_of_orders, and lifetime_customer_value:
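A query along these lines (a sketch; the table and column names are illustrative):

```sql
SELECT
  user_id,
  COUNT(DISTINCT order_id) AS lifetime_number_of_orders,
  SUM(sale_price)          AS lifetime_customer_value
FROM order_items
GROUP BY user_id
```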
In Looker, a query is based on an Explore, includes measure and dimension fields, adds any applicable filters, and may also specify a sort order. An NDT contains all these elements plus the output names for the columns.
The simple example below produces a derived table with three columns: user_id, lifetime_customer_value, and lifetime_number_of_orders. You don’t need to manually write the query in SQL — instead, Looker creates the query for you by using the specified Explore order_items and some of that Explore’s fields (order_items.user_id, order_items.total_revenue, and order_items.order_count).
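A sketch of such an NDT (the view name is illustrative; the dimensions expose the resulting columns):

```lookml
view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: lifetime_customer_value { field: order_items.total_revenue }
      column: lifetime_number_of_orders { field: order_items.order_count }
    }
  }
  dimension: user_id { type: number }
  dimension: lifetime_customer_value { type: number }
  dimension: lifetime_number_of_orders { type: number }
}
```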
Using include Statements to Enable Referencing Fields
In the NDT’s view file you use the explore_source parameter to point to an Explore and to define the desired columns and other desired characteristics for the NDT. Because you are pointing to an Explore from within the NDT’s view file, you must also include the file containing the Explore’s definition. Explores are usually defined within a model file, but in the case of NDTs it’s cleaner to create a separate file for the Explore using the .explore.lkml file extension, as described in the documentation for Creating Explore Files. That way, in your NDT view file you can include a single Explore file and not the entire model file. In that case:
- The NDT’s view file should include the Explore’s file. For example:
include: "/explores/order_items.explore.lkml"
- The Explore’s file should include the view files that it needs. For example:
include: "/views/order_items.view.lkml"
include: "/views/users.view.lkml"
- The model should include the Explore’s file. For example:
include: "/explores/order_items.explore.lkml"
Explore files will listen to the connection of the model they are included in. Consider this fact when you include Explore files in models that are configured with a connection that is different from the Explore file’s parent model. If the schema for the including model’s connection differs from the schema for the parent model’s connection, it can cause query errors.
Defining NDT Columns
As shown in the example above, you use column to specify the output columns of the derived table.
Specifying the Column Names
For the user_id column, the column name matches the name of the specified field in the original Explore.
Frequently, you will want a different column name in the output table than the name of the field in the original Explore. In the example above, we are producing a lifetime value calculation by user using the order_items Explore. In the output table, total_revenue is really a customer’s lifetime_customer_value.
The column declaration supports declaring an output name that is different from the input field. For example, the code below says, “make an output column named lifetime_value from field order_items.total_revenue”:
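In LookML, that column declaration looks like this:

```lookml
column: lifetime_value { field: order_items.total_revenue }
```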
Implied Column Names
If the field parameter is left out of a column declaration, it is assumed to be <explore_name>.<field_name>. For example, if you have specified explore_source: order_items, then the declaration column: user_id {} is equivalent to column: user_id { field: order_items.user_id }.
Creating Derived Columns for Calculated Values
You can add derived_column parameters to specify columns that don’t exist in the explore_source parameter’s Explore. Each derived_column parameter has a sql parameter specifying how to construct the value.
Your sql calculation can use any columns that you have specified using column parameters. Derived columns cannot include aggregate functions, but they can include calculations that can be performed on a single row of the table.
The example below produces the same derived table as the earlier example, except that it adds a calculated average_customer_order column, which is calculated from the lifetime_customer_value and lifetime_number_of_orders columns in the NDT.
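A sketch of this derived column (the view name is illustrative; note that the sql expression refers to the NDT’s own column names):

```lookml
view: user_order_facts {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: lifetime_customer_value { field: order_items.total_revenue }
      column: lifetime_number_of_orders { field: order_items.order_count }
    }
    derived_column: average_customer_order {
      sql: lifetime_customer_value / lifetime_number_of_orders ;;
    }
  }
}
```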
Using SQL Window Functions
Some database dialects support window functions, especially to create sequence numbers, primary keys, running and cumulative totals, and other useful multi-row calculations. After the primary query has been executed, any derived_column declarations are executed in a separate pass.
If your database dialect supports window functions, then you can use them in your native derived table. Create a derived_column parameter with a sql parameter that contains the desired window function. When referring to values, you should use the column name as defined in your NDT.
The example below creates an NDT that includes the user_id, order_id, and created_time columns. Then, using a derived column with a SQL ROW_NUMBER() window function, it calculates a column that contains the sequence number of a customer’s order.
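A sketch of that NDT (the view and derived-column names are illustrative; the window function syntax assumes a dialect that supports ROW_NUMBER()):

```lookml
view: user_order_sequence {
  derived_table: {
    explore_source: order_items {
      column: user_id { field: order_items.user_id }
      column: order_id { field: order_items.order_id }
      column: created_time { field: order_items.created_time }
    }
    derived_column: order_sequence_number {
      sql: ROW_NUMBER() OVER (PARTITION BY user_id ORDER BY created_time) ;;
    }
  }
}
```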
Adding Filters to an NDT
Suppose we wanted to build a derived table of a customer’s value over the past 90 days. We want the same calculations as we performed above, but we only want to include purchases from the last 90 days.
We just add a filter to the derived_table that filters for transactions in the last 90 days. The filters parameter for a derived table uses the same syntax as you use to create a filtered measure.
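A sketch of the filtered explore_source (the date field and column names are illustrative assumptions):

```lookml
explore_source: order_items {
  column: user_id { field: order_items.user_id }
  column: ninety_day_customer_value { field: order_items.total_revenue }
  column: ninety_day_number_of_orders { field: order_items.order_count }
  filters: [order_items.created_date: "90 days"]
}
```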
Filters will be added to the WHERE clause when Looker writes the SQL for the derived table.
Using Templated Filters
You can use bind_filters to include templated filters:
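For example, a bind_filters declaration along these lines (the field names match the discussion below and are otherwise illustrative):

```lookml
explore_source: order_items {
  column: user_id { field: order_items.user_id }
  column: lifetime_customer_value { field: order_items.total_revenue }
  bind_filters: {
    to_field: users.created_date
    from_field: filtered_lookml_dt.filter_date
  }
}
```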
This is essentially the same as using the following code in a sql block:
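That sql-block equivalent, using Looker’s Liquid condition tags, is roughly:

```lookml
sql: {% condition filtered_lookml_dt.filter_date %} users.created_date {% endcondition %} ;;
```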
The to_field is the field to which the filter is applied. The to_field must be a field from the underlying explore_source.
The from_field specifies the field from which to get the filter, if there is a filter at runtime.
In the bind_filters example above, Looker will take any filter applied to the filtered_lookml_dt.filter_date field and apply the filter to the users.created_date field.
You can also use the bind_all_filters subparameter of explore_source to pass all runtime filters from an Explore to an NDT subquery. See the explore_source documentation page for more information.
Sorting and Limiting NDTs
You can also sort and limit the derived tables, if desired:
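A sketch of sorting and limiting (the sort field and limit are illustrative):

```lookml
explore_source: order_items {
  column: user_id { field: order_items.user_id }
  column: lifetime_customer_value { field: order_items.total_revenue }
  sorts: [order_items.total_revenue: desc]
  limit: 10
}
```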
Remember, an Explore may display the rows in a different order than the underlying sort.
Converting NDTs to Different Time Zones
You can specify the time zone for your NDT using the timezone subparameter:
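For example (the time zone value shown is illustrative; any supported time zone name can be used):

```lookml
explore_source: order_items {
  column: user_id { field: order_items.user_id }
  column: created_time { field: order_items.created_time }
  timezone: "America/Los_Angeles"
}
```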
When you use the timezone subparameter, all time-based data in the NDT will be converted to the time zone you specify. See the timezone values documentation page for a list of the supported time zones.
If you don’t specify a time zone in your NDT definition, the NDT will not perform any time zone conversion on time-based data, and instead time-based data will default to your database time zone.
If the NDT is not persistent, you can set the time zone value to "query_timezone" to automatically use the time zone of the currently running query.
You can use a number of LookML parameters to make your Explores more user-friendly by changing how fields appear in the user interface without altering your underlying LookML. This page provides an overview of LookML parameters that modify the appearance of the Looker Explore menu and field picker.
In Looker, your users can see Explores listed in the Explore menu. Within an Explore, your users can use the field picker to select the fields (dimensions and measures) they want to use to query their data.
This page lists each commonly used parameter with a link to its full reference page, a short description of its function, and an image of what it does.
For tips and suggestions about making an Explore user-friendly, see the article Best Practice: Create a Positive Experience for Looker Users in Looker’s Help Center.
Explore Name and Menu
This section describes the default behavior for an Explore’s name and appearance on the Explore menu, then describes the LookML parameters that let you change that behavior:
- label (Model)
- label (Explore)
- group_label (Explore)
- description (Explore)
- hidden (Explore)
Explore Menu Default Behavior
Model Names
By default, the Explore menu is organized by model names, which are determined by the name of each model file. The Explore menu shows a formatted version of the model name, with underscores changed to spaces and each word capitalized. Under each model name is a list of the Explores defined in that model file.
Explore Names
The name of each Explore in the menu is based on the corresponding explore parameter in the model file. As with the model names, Explore names are formatted in the menu so that underscores are changed to spaces and each word is capitalized. The Explore name is also shown as the Explore title above the field picker.
For more information about model files, see Understanding Model and View Files. For more information about defining Explores, see the explore parameter documentation page.
label (Model)
A model’s label parameter renames a model in the Explore menu without changing how it’s referenced in LookML. Explore URLs, the Looker IDE, and SQL Runner still reflect the actual model name, and the way the model should be referenced in LookML and Admin settings remains unchanged.
If the label parameter is not used, the model name will be displayed according to its default behavior.
label (Explore)
An Explore’s label parameter renames an Explore in the Explore menu and on the Explore page without changing how it’s referenced in LookML.
If the label parameter is not used, the Explore name will be displayed according to its default behavior.
group_label (Explore)
An Explore’s group_label parameter changes the default organization of the Explore menu. Instead of listing each Explore under its model’s name, group_label allows you to define a custom heading under which to list one or more Explores.
description (Explore)
An Explore’s description parameter lets you add a description of an Explore to the UI, so users can get additional information while creating reports.
Users can see the description by hovering over the information icon, next to the Explore name at the top of the field picker. The information icon also appears in the drop-down list of Explores. If you do not explicitly add a description to an Explore, no description will be shown.
hidden (Explore)
An Explore’s hidden parameter hides the Explore from the Explore menu. By default, hidden is off and the Explore will be displayed. hidden does not hide LookML or prevent access to an Explore via the URL. hidden is not meant as a security feature, but rather as a presentation feature.
Field Picker
This section describes the default behavior of the field picker, then describes the LookML parameters that let you change that behavior.
Change how view names appear in the field picker with:
- view_label (Explore)
- view_label (Join)
- label (View)
Change how names of individual fields appear in the field picker with:
- label (Field)
- view_label (Field)
- group_item_label (Field)
Change how fields are organized within the field picker with:
- group_label (Field)
- fields (Field)
- hidden (Field)
Field Picker Default Behavior
By default, the field picker is organized by headings that correspond to the views specified by the LookML view parameter. Each view’s fields are grouped in the field picker. The type of field (dimension, dimension group, measure, filter field, and parameter field) determines where the field is shown within the view. Unless the label parameter is used to alter the display name, the field picker will show a formatted version of the LookML view or field name, with underscores changed to spaces and each word capitalized.
This example shows how the names of a view, dimension, dimension group, and measure show by default. The same formatting will apply to the names of any filter fields and parameter fields.
If you change the name of a field, the field picker adapts to the new name, but you might want to add an alias parameter with the field’s old name. That parameter provides alternative names for a field that might appear in the URL for a query. It can be useful in cases when field names in a model change, but some people have shared links to data and you want to keep those pre-existing URLs functioning.
view_label (Explore)
An Explore’s view_label parameter changes how the group of fields from an Explore’s base view is labeled in the field picker without changing how the Explore and its fields are referenced in LookML.
view_label (Join)
A join’s view_label parameter lets you group fields from one view under another view’s name in the field picker. If you do not explicitly add a view_label to a join, the view_label defaults to the name of the join.
label (View)
A view’s label parameter changes how the view is labeled in the field picker without changing how it is referenced in LookML. If not specified, the label defaults to the name of the view. Unlike view_label (Explore) and view_label (Join), this parameter affects all Explores that use the view.
description (Field)
You can add a description to any field. The user can see this description in multiple places. In the field picker, Looker displays this description if a user hovers over the help link. It is also displayed when the user hovers over the column name in a table or table chart visualization in an Explore page, a dashboard, or a Look.
hidden (Field)
By default, fields specified in a view will be displayed in the field picker. The field’s hidden parameter hides a field in the field picker.
Hidden fields can still be accessed in the UI if they are manually added to the URL and will show up in some Looker windows. Therefore, think of hidden as a way to keep the field picker clean, and not as a security feature.
fields (Field)
The fields parameter enables you to specify which fields from an Explore’s base view and joins are exposed in the field picker. If you do not use the fields parameter, Looker exposes all fields.
To specify the fields you want to display in the field picker, use a comma-separated list of fields or sets. To specify the fields you want to hide from the field picker, use the set ALL_FIELDS* with a comma-separated list of the fields or sets to be excluded, with a hyphen (-) preceding each.
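For example, an exclusion list might be sketched as follows (the excluded field names are illustrative):

```lookml
explore: order_items {
  fields: [ALL_FIELDS*, -users.email, -users.phone]
}
```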
label (Field)
A field’s label parameter lets you change how a field name will appear in the field picker without changing how it is referenced in LookML. If no label is specified, the label defaults to the name of the field.
view_label (Field)
A field’s view_label parameter lets you change the name of the view under which the field is listed in the field picker without changing how it is referenced in LookML. If no label is specified, the field appears under the label for the view in which it is defined.
group_label (Field)
The group_label parameter lets you combine fields together in a common drop-down list within a view in the field picker.
group_item_label (Field)
When fields are shown under a group label, they might not need to have their full field names or labels displayed in the field picker. In the example shown above for group_label (Field), the group label already indicates that the fields pertain to shipping. In cases like this, you can add the group_item_label parameter to each of these grouped fields to change how they look under the group label.
This page provides an overview of LookML parameters that affect filter suggestions. This page lists each parameter with a link to its full reference page, a short description of its function, and an image of what it does.
Default Behavior
By default, in a Look or an Explore, when a user enters a filter value for a dimension of type: string, Looker suggests options that match what the user enters. On a dashboard, if the filter is a Field type filter and the field it is linked to is of type: string, Looker also suggests filter options. These filter suggestions are created using a SELECT DISTINCT query on the dimension, so the suggestions will only return values that match existing data for that field.
Enabling or Disabling Filter Suggestions
This section describes the following LookML parameters that let you enable or disable filter suggestions:
- suggestable (Field)
- suggestions (View)
suggestable (Field)
Setting suggestable to no lets you disable suggestions for a dimension or filter. The default value of suggestable is yes, so Looker provides suggestions to the user when possible. If you set suggestable to no instead, Looker does not make suggestions.
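For example (the dimension name and SQL are illustrative):

```lookml
dimension: first_name {
  sql: ${TABLE}.first_name ;;
  suggestable: no
}
```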
suggestions (View)
Setting suggestions to no lets you disable suggestions for all the dimensions in a view. The default value of suggestions is yes, so Looker provides suggestions to the user when possible. If you set suggestions to no instead, Looker does not make suggestions.
Filter Suggestion Values
This section describes the following LookML parameters that let you set or restrict the values shown in filter suggestions:
- allowed_value (Field)
- bypass_suggest_restrictions (Field)
- case (Field)
- full_suggestions (Field)
- suggest_dimension (Field) and suggest_explore (Field)
- suggestions (Field)
allowed_value (Field)
allowed_value works with the parameter LookML parameter. If a Looker developer provides a parameter filter-only field, the values entered or selected by users can be referenced using a {% parameter %} Liquid variable. Optionally, you can use allowed_value to define a list of values a user can choose for that parameter field.
allowed_value has two subparameters, label and value. The label subparameter specifies the options the user will see in the filter suggestions. The value subparameter contains the values that are passed to the {% parameter %} Liquid variable.
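A sketch of a parameter with allowed_value subparameters (the parameter name and values are illustrative):

```lookml
parameter: date_granularity {
  type: unquoted
  allowed_value: {
    label: "Daily"
    value: "date"
  }
  allowed_value: {
    label: "Monthly"
    value: "month"
  }
}
```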
For more information, see this section of the parameter reference page.
bypass_suggest_restrictions (Field)
bypass_suggest_restrictions enables filter suggestions in situations where they would otherwise be disabled or limited. If you have used sql_always_where or access_filter to restrict the rows users can see, then Looker also restricts the filter suggestions to the values in the permitted rows. However, if you’re certain that there are no possible values in a particular dimension or filter field that would reveal sensitive information, you can use bypass_suggest_restrictions to reinstate the full set of filter suggestions.
case (Field)
case enables you to bucket a dimension’s results with case logic. This impacts filter suggestions because only the values defined in the case statement are shown as filter suggestions for that dimension.
full_suggestions (Field)
When full_suggestions is set to no, it disables filter suggestions for a dimension.
suggest_dimension (Field) and suggest_explore (Field)
Looker typically generates filter suggestions by executing a SELECT DISTINCT query on the filter field. For some large tables this query can be too slow or create too large a database load. You can use suggest_dimension to make Looker query an alternative dimension for filter suggestions, in combination with suggest_explore if that dimension lives in a different Explore.
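For example, suggestions for a dimension could be drawn from a smaller pre-aggregated Explore (the Explore and dimension names here are illustrative):

```lookml
dimension: customer_name {
  sql: ${TABLE}.name ;;
  suggest_explore: customer_facts
  suggest_dimension: customer_facts.customer_name
}
```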
suggestions (Field)
suggestions lets you hard-code a list of possible filter suggestion values. This can be useful if you do not want certain data values in a field to appear as suggestions, and instead wish to specify a more limited list.
Caching Filter Suggestions
This section describes the suggest_persist_for parameter, which lets you configure the length of time that filter suggestions are cached.
If you wish to reduce the load on your database and the number of data values for a dimension is very high, consider using a parameter to disable filter suggestions for your dimension. If you want filter suggestions to appear, consider using a parameter to limit the filter values that are queried or to hard-code the appropriate options.
suggest_persist_for (Field)
By default, filter suggestions are cached for 6 hours, resulting in the same list of suggestions for that length of time. The suggest_persist_for parameter enables you to change how long filter suggestions are cached.
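For example, to cache a dimension’s suggestions for a day instead of the default 6 hours (the dimension name is illustrative):

```lookml
dimension: customer_name {
  sql: ${TABLE}.name ;;
  suggest_persist_for: "24 hours"
}
```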
Looker Blocks™ are pre-built data models for common analytical patterns and data sources. Reuse the work others have already done rather than starting from scratch, then customize the blocks to your exact specifications. From optimized SQL patterns to fully built-out data models, Looker Blocks can be used as a starting point for quick and flexible data modeling in Looker.
Available Blocks
There are many Looker Blocks to choose from. To see what blocks are currently available, check out the Blocks Directory or select a category below:
- Analytic Blocks: Best-practice design patterns for various types of analysis
- Source Blocks: Analytics for a third-party data source (for example, Salesforce, Zendesk, Stripe), based on the schemas produced by Looker’s ETL partners
- Data Blocks: Pre-modeled public data you can add to your models (see below)
- Data Tool Blocks: Techniques for specific types of data analysis
- Viz Blocks: Custom visualization types you can use to display your query output (see below)
- Embedded Blocks: Techniques for embedding data into custom applications
Looker blocks were created to make analyzing data easier and more efficient. They are available on an “as is” basis, meaning there will not be updates moving forward. Looker cannot represent or warrant that this data will be accurate, reliable, or error-free. Please consult the documented data source sites for details on how their data is being collected and how to interpret it.
Click a block that interests you to see its specific usage instructions.
Some Looker Blocks can be quickly and easily installed using the Looker Marketplace. If you are deploying a block through the Looker Marketplace, your Looker admin must have enabled both the Marketplace and the Local Project Import Labs features. See the Looker Marketplace documentation page for more information about installing and customizing Looker Blocks available from the Looker Marketplace.
Standardization and Customization
To build on an existing block, your Looker admin must enable the Local Project Import Labs feature.
The ease of using the different blocks will vary, depending on the degree to which your database schema is standardized. Most Looker Blocks require some customization to fit your data schema, with the exception of data blocks, which are the simplest to implement but are not customizable.
Data blocks, which include both public datasets and full LookML models, simply require copying the LookML model from the GitHub repo to access the modeled tables. See below for detailed instructions.
Data collection applications, such as Segment and Snowplow, track events in a relatively standardized format. This makes it possible to create templatized design patterns — capable of data cleansing, transformation, and analytics — that can be used by any customer using these applications.
Other web applications — including Salesforce and Marketo — let you add custom fields for your internal users. Naturally, this creates data in a less standardized format. As a result, we can templatize some of the data model to get the analytics up and running, but you’ll need to customize the non-standardized portion.
Finally, we have blocks for general business insights. These are optimized SQL or LookML design patterns that are data-source agnostic. For example, many companies will want to analyze the lifetime value of a customer over time. There are some assumptions baked into these patterns, but they can be customized to match your specific business needs. These patterns reflect Looker’s point of view on the best way to conduct certain types of analysis.
If you’re new to Looker, your Looker analyst can help you get the most from these models.
Adding Blocks to Your LookML
- Some blocks demonstrate both Explores and views in the same file. This is for ease of viewing, but in general you’ll want to copy the appropriate sections of LookML into the appropriate places in your data model. See Understanding Model and View Files for more information.
- In some cases you’ll want to create new LookML files in your data model to house the examples.
Using Data Blocks
Data blocks are special types of Looker Blocks that provide the dataset as well as the data model. Looker Data Blocks include public data sources, such as:
- Demographic data: Common demographic metrics from the American Community Survey at the state, county, ZIP code tabulation area, and even census block group level
- Economic indicator data: Information on key United States economic indicators (provided by Quandl) focusing on inflation, unemployment rates, interest rates, debt, and growth indicators
- Exchange rate data: Daily closing historical exchange rates for major currencies going back to the introduction of the Euro in 1999
- Geographic mapping data: Mapping files that make it easy to translate between different geographic areas like block group, census tract, ZIP code, county, and state within the United States
- Weather data daily: Weather reporting in the United States at the ZIP code level from 1920 through the previous day
The Weather and Exchange Rate blocks are updated nightly. The other datasets are fairly static and will require minimal updates.
To see the full list of currently available blocks, see the Data Block category of the Blocks Directory.
Accessing Datasets on Different Databases
The procedure for accessing a data block’s dataset varies depending on your database schema. The sections below contain instructions for accessing datasets on these databases:
Accessing Datasets on Google BigQuery
If you have an existing Google BigQuery account, you can access our BigQuery-hosted datasets. Skip ahead to the Adding Data Blocks to Projects section of this page.
If you do not already have a Google BigQuery account, you can set up a free trial and then access Looker’s public datasets on BigQuery.
Google BigQuery does not support project sharing across regions. To directly access Data Blocks in Google BigQuery from outside the U.S. region, you can:
- Import Looker’s public data from Google Cloud Storage or Amazon S3.
- Create a new Google BigQuery connection based in the US region.
Accessing Datasets on Snowflake
You can access a Looker-hosted Snowflake data warehouse and use data sharing to get access to Looker’s public datasets.
Snowflake does not support project sharing across regions. To directly access Data Blocks in Snowflake from outside the U.S. region, you can:
- Import Looker’s public data from Google Cloud Storage or Amazon S3.
- Create a new Snowflake connection based in the US region.
First, reach out to your Looker analyst with the following information:
- Customer name: `<Company1>`
- Customer Looker instance: `<instance_name.looker.com>`
- Snowflake account name: `<company1.snowflakecomputing.com>`
Once Looker’s Ops team provides the Snowflake account with access to the data blocks, run the following commands to get access to Looker’s public datasets:
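The original page listed the share-consumption commands at this point. As a hedged sketch (the database and role names below are hypothetical, and the actual share name comes from Looker’s Ops team), consuming a Snowflake share generally looks like:

```sql
-- <share_name> is supplied by Looker's Ops team; the database and role
-- names below are hypothetical. Run as ACCOUNTADMIN or a role with the
-- IMPORT SHARE privilege.
CREATE DATABASE looker_datablocks FROM SHARE <looker_account>.<share_name>;

-- Allow other roles on your account to query the shared data:
GRANT IMPORTED PRIVILEGES ON DATABASE looker_datablocks TO ROLE sysadmin;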
If you are using one of Looker’s public datasets with your own Snowflake data warehouse, your Looker user must have database write privileges to create PDTs. See the Snowflake configuration documentation page for more information about connecting Looker to Snowflake.
Accessing Datasets on Other Databases
Are you on Amazon Redshift? MySQL? PostgreSQL? Oracle?
We’ve made the transformed data for each of these datasets publicly available in both GCS and S3 so that you can directly import them into the database of your choice.
We’ve also made the Data Definition Language (DDL) available for each of the datasets in the GitHub Repo. The DDL statements might need to be modified for the datatypes in your selected database, but should provide an idea of the column types for each table.
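For illustration only (the table and column names are hypothetical, loosely echoing the fields used in the demographic example later on this page; the real definitions live in the GitHub repo), a DDL statement might look like:

```sql
-- Hypothetical sketch; see the datablocks GitHub repos for the actual DDL.
-- Adjust types (VARCHAR vs. TEXT, DECIMAL precision) for your dialect.
CREATE TABLE bg_facts (
  census_block_group VARCHAR(12),
  total_population   BIGINT,
  avg_income_house   DECIMAL(12, 2)
);
```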
Download the data directly from one of these locations:
- GCS: `gs://looker-datablocks/`
- S3: `s3://looker-datablocks/`
- S3 bucket web link: http://looker-datablocks.s3-website-us-east-1.amazonaws.com/
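As a sketch, assuming the `gsutil` and AWS CLIs are installed and the buckets remain publicly readable (the `acs/` prefix below is a hypothetical example path, not a confirmed layout), the download might look like:

```shell
# List the datasets available in the public GCS bucket:
gsutil ls gs://looker-datablocks/

# Copy one dataset locally ("acs/" is a hypothetical example prefix):
gsutil -m cp -r gs://looker-datablocks/acs/ ./acs/

# Or pull from the public S3 bucket without AWS credentials:
aws s3 cp s3://looker-datablocks/ ./looker-datablocks/ --recursive --no-sign-request
```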
Accessing the LookML Model
Fork one of our GitHub repos into a new GitHub repo (either hosted by Looker or by your company) that you can then extend or refine within your instance:
- Demographic Data (American Community Survey) - https://github.com/llooker/datablocks-acs
- Weather (GSOD) - https://github.com/llooker/datablocks-gsod
- Financial Indicators - https://github.com/llooker/datablocks-finance
- Exchange Rates - https://github.com/llooker/datablocks-exchangerate
Adding Data Blocks to Projects
In addition to the method described in this section, you can also use LookML refinements to build on the LookML of views and Explores in your projects.
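As a minimal sketch, assuming the block project has been imported as a local dependency (the include path and label here are hypothetical), a refinement adds to the block’s Explore without forking it:

```lookml
# The "+" prefix refines the existing Explore from the included block
# file rather than redeclaring it. The path and label are hypothetical.
include: "//datablock_project/bq.explore"

explore: +fast_facts {
  label: "Census Fast Facts (refined)"
}
```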
To build on a data block, your Looker admin must enable the Local Project Import Labs feature. If you are deploying a block through the Looker Marketplace, your Looker admin must have enabled both the Marketplace and the Local Project Import Labs features.
To add a data block to your project:
- Add a new project to your Looker instance.
- Fork or copy the GitHub repos listed above to access the prebuilt LookML. Be sure to create a new GitHub repo.
- Remove the files for other database dialects from the repo. Looker Blocks typically contain files for Google BigQuery, Amazon Redshift, and Snowflake. For example, if you are setting up data blocks on Google BigQuery, you only need the Google BigQuery view files, Explore file, and model file.
- Replace the connection name in your model file with the database connection where the data for data blocks lives or, in the case of Google BigQuery and Snowflake, with the existing database connection from which you will be extending or refining.
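For example, a hedged sketch of that swap at the top of the block’s model file (the connection name is a placeholder, not a real value):

```lookml
# Replace the connection shipped with the block with the connection
# defined on your own Looker instance ("your_connection_name" is a
# hypothetical placeholder):
connection: "your_connection_name"

include: "*.view"
```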
- All join logic lives in an `.explore` file in each of the repositories. This is the file you will include in the following steps, after you have set up your project manifest.
- In your main Looker project, where you will be extending or refining data blocks, create a project manifest file.
Add the following LookML to the project manifest file to reference data blocks in your main Looker project:
```lookml
project_name: "<your_project_name>"

local_dependency: {
  project: "<project_name_of_datablock>"
}
```
Setup Considerations and Options
Google BigQuery: Be sure to use the correct set of modeled files. If you are on Google BigQuery, you may want to reference all files with `bq` in the file name. You may have to adapt our Google BigQuery model dialects to your own database dialect.
Google BigQuery does not support project sharing across regions. To directly access Data Blocks in Google BigQuery from outside the U.S. region, you can:
- Import Looker’s public data from Google Cloud Storage or Amazon S3.
- Create a new Google BigQuery connection based in the US region.
Extensions: All of our projects have been set up to allow for extensions from Explore files, since model extension could cause issues with multiple connections.
Joining Derived Tables: You may want to take a look at our documentation for native derived tables. You can let Looker write SQL for you at different levels of aggregation on our publicly available datasets and join them into your model.
Merging Result Sets: You can also choose to merge result sets from our datasets with your data by combining query result sets. View this short video to learn more about merging data.
Example Setup of the Demographic Dataset
Get access to data by either downloading raw data from our S3 or GCS buckets or by connecting to a Looker database.
Import the Demographic Data Block model from LookML as a separate project in your Looker instance:
Use the `include` parameter to bring in the view file. Then either extend or refine the view file, or use native derived tables to get data at the level of aggregation necessary for your Explores.
In our example, since the demographic data is at a different level of aggregation than our e-commerce dataset (block group versus ZIP code), we use native derived tables to aggregate stats up to the ZIP code level, which eliminates messy many-to-many joins:
```lookml
include: "/american_community_survey/bq.explore"

view: zipcode_income_facts {
  derived_table: {
    persist_for: "10000 hours"
    explore_source: fast_facts {
      column: ZCTA5 { field: tract_zcta_map.ZCTA5 }
      column: income_household { field: bg_facts.avg_income_house }
      column: total_population { field: bg_facts.total_population }
    }
  }
  dimension: ZCTA5 {}
  dimension: income_household {
    hidden: yes
  }
}
```

Join the view files into the model:
```lookml
include: "acs*.view"

explore: order_items {
  join: users {
    sql_on: ${users.id} = ${order_items.user_id} ;;
    type: left_outer
    relationship: many_to_one
  }
  join: zipcode_income_facts {
    sql_on: ${users.zip} = ${zipcode_income_facts.ZCTA5} ;;
    type: left_outer
    relationship: many_to_one
  }
}
```

Explore your data.
Visualize your data.
Using Viz Blocks
Looker includes a variety of native visualization types. However, if you have charting needs that Looker’s native visualization types don’t cover, you can add your own custom visualization types. You can also develop a custom visualization and make it available to all Looker users through the Looker Marketplace.
Viz Blocks are pre-built JavaScript visualization types that are hosted by Looker. You can add the Viz Blocks to your Looker instance, and they will act similarly to any of Looker’s native visualization types: they appear on the visualization menu bar, and they include core functionality such as drilling, downloading, embedding, and scheduling.
To learn more about a Viz Block, select the visualization type in the Viz Block category of the Blocks Directory, then click See the Code and navigate to the Viz Block’s README file. The README file shows an example of the visualization and gives more information about the Viz Block. For some visualizations, the README file also provides a URL and instructions for adding the Viz Block.
To add the visualization type to your instance, see the instructions in the README file (if any) and the information on our Visualizations documentation page.