WHY CONFIGURE ROW- OR OBJECT-LEVEL SECURITY?
- Configuring RLS or OLS can be benificial for your model & reporting:
+ Configuring RLS or OLS can be beneficial for your model & reporting:
Reduce risk and improve governance by ensuring users only see data they have access to.
Configure dynamic RLS with central role tables for consistency and lightweight maintenance.
Have granular control over what data and objects can be queried.
diff --git a/content/tutorials/incremental-refresh/incremental-refresh-setup.md b/content/tutorials/incremental-refresh/incremental-refresh-setup.md
index f6d85719..7f3a6687 100644
--- a/content/tutorials/incremental-refresh/incremental-refresh-setup.md
+++ b/content/tutorials/incremental-refresh/incremental-refresh-setup.md
@@ -217,7 +217,7 @@ If you have configured a native query, it may still be possible to configure and
incremental-refresh-native-query-formatted.png
-2. __Replace the Native Query String in the Source Expression:__ Copy the query and replace the existing query, which will be full of characters like (lf) (line feed), (cr) (carraige return) and (n) (new line). Doing this makes the query actually readable and editable without resorting to the Native Query user interface of Power BI Desktop.
+2. __Replace the Native Query String in the Source Expression:__ Copy the query and replace the existing query, which will be full of characters like (lf) (line feed), (cr) (carriage return) and (n) (new line). Doing this makes the query actually readable and editable without resorting to the Native Query user interface of Power BI Desktop.
diff --git a/content/tutorials/new-as-model.md b/content/tutorials/new-as-model.md
index 6de11a03..fce1929a 100644
--- a/content/tutorials/new-as-model.md
+++ b/content/tutorials/new-as-model.md
@@ -30,7 +30,7 @@ This page walks you through the process of creating a new Analysis Services tabu

-- Provide a name for your model or use the default value. Then, choose the compatibility level depending on which version of Analysis Services you are targetting. Your options are the following:
+- Provide a name for your model or use the default value. Then, choose the compatibility level depending on which version of Analysis Services you are targeting. Your options are the following:
- 1200 (Works with SQL Server 2016 or newer, and Azure Analysis Services)
- 1400 (Works with SQL Server 2017 or newer, and Azure Analysis Services)
- 1500 (Works with SQL Server 2019 or Azure Analysis Services)
@@ -48,11 +48,11 @@ Once your model is created, the next step is to add a data source and some table
#### Adding a data source and tables
-Before you can import data to your tabular model, you have to set up one or more data sources. Locate the TOM Explorer, right-click on the "Data Sources" folder and choose "Create". For a model that uses compatibility level 1400 or higher, we have two options: Legacy and Power Query data sources. To learn more about th differences between these two types of data sources, [consult the Microsoft Analysis Services blog](https://docs.microsoft.com/en-us/archive/blogs/analysisservices/using-legacy-data-sources-in-tabular-1400).
+Before you can import data to your tabular model, you have to set up one or more data sources. Locate the TOM Explorer, right-click on the "Data Sources" folder and choose "Create". For a model that uses compatibility level 1400 or higher, we have two options: Legacy and Power Query data sources. To learn more about the differences between these two types of data sources, [consult the Microsoft Analysis Services blog](https://docs.microsoft.com/en-us/archive/blogs/analysisservices/using-legacy-data-sources-in-tabular-1400).

-In this example, we will create a Power Query data source, which we will use to import a few tables from a SQL Server relational database. Once the data source is created, hit F2 to rename it and configure the data source using the Propery Grid as seen in the screenshot below:
+In this example, we will create a Power Query data source, which we will use to import a few tables from a SQL Server relational database. Once the data source is created, hit F2 to rename it and configure the data source using the Property Grid as seen in the screenshot below:

diff --git a/content/tutorials/udfs.md b/content/tutorials/udfs.md
index 1f1002c0..3b3f4433 100644
--- a/content/tutorials/udfs.md
+++ b/content/tutorials/udfs.md
@@ -202,7 +202,7 @@ When you select multiple UDFs in the TOM Explorer, you can use the **Batch Renam
### Namespaces
-The concept of "namespace" doesn't exist in DAX, yet the recommendation is to name UDFs in such a way that ambiguities are avoided and that the origin of the UDF is clear. For example `DaxLib.Convert.CelsiusToFahrenheit` (using '.' as namespace separators). When a UDF is named this way, the TOM Explorer will display the UDF in a hierarchy based on the names. You can toggle the display of UDFs by namespace using the **Group User-Defined Functions by namespace** tuggle button in the toolbar above the TOM Explorer (note, this button is only visible when working with a model using Compatibility Level 1702 or higher).
+The concept of "namespace" doesn't exist in DAX, yet the recommendation is to name UDFs in such a way that ambiguities are avoided and that the origin of the UDF is clear. For example `DaxLib.Convert.CelsiusToFahrenheit` (using '.' as namespace separators). When a UDF is named this way, the TOM Explorer will display the UDF in a hierarchy based on the names. You can toggle the display of UDFs by namespace using the **Group User-Defined Functions by namespace** toggle button in the toolbar above the TOM Explorer (note, this button is only visible when working with a model using Compatibility Level 1702 or higher).

diff --git a/data/common_typos.json b/data/common_typos.json
new file mode 100644
index 00000000..6ceb1c02
--- /dev/null
+++ b/data/common_typos.json
@@ -0,0 +1,47 @@
+{
+ "description": "Common typos found in technical documentation. Use for simple pattern-based spellcheck.",
+ "version": "1.0.0",
+ "last_updated": "2026-02-01",
+ "typos": [
+ {"wrong": "noticable", "correct": "noticeable", "category": "common"},
+ {"wrong": "succesful", "correct": "successful", "category": "common"},
+ {"wrong": "occurence", "correct": "occurrence", "category": "common"},
+ {"wrong": "occured", "correct": "occurred", "category": "common"},
+ {"wrong": "surronding", "correct": "surrounding", "category": "common"},
+ {"wrong": "seemless", "correct": "seamless", "category": "common"},
+ {"wrong": "suported", "correct": "supported", "category": "common"},
+ {"wrong": "reciding", "correct": "residing", "category": "common"},
+ {"wrong": "elipsis", "correct": "ellipsis", "category": "common"},
+ {"wrong": "defind", "correct": "defined", "category": "common"},
+ {"wrong": "chooseing", "correct": "choosing", "category": "common"},
+ {"wrong": "ressource", "correct": "resource", "category": "common"},
+ {"wrong": "prefered", "correct": "preferred", "category": "common"},
+ {"wrong": "Pipeliens", "correct": "Pipelines", "category": "transposed"},
+ {"wrong": "upate", "correct": "update", "category": "missing_letter"},
+ {"wrong": "wheras", "correct": "whereas", "category": "common"},
+ {"wrong": "seperate", "correct": "separate", "category": "common"},
+ {"wrong": "definately", "correct": "definitely", "category": "common"},
+ {"wrong": "occurance", "correct": "occurrence", "category": "common"},
+ {"wrong": "accomodate", "correct": "accommodate", "category": "common"},
+ {"wrong": "blick", "correct": "click", "category": "adjacent_key"},
+ {"wrong": "levells", "correct": "levels", "category": "double_letter"},
+ {"wrong": "colapse", "correct": "collapse", "category": "missing_letter"},
+ {"wrong": "collaps", "correct": "collapse", "category": "missing_letter"},
+ {"wrong": "defualt", "correct": "default", "category": "transposed"},
+ {"wrong": "compatability", "correct": "compatibility", "category": "common"},
+ {"wrong": "avaliable", "correct": "available", "category": "transposed"},
+ {"wrong": "necesarily", "correct": "necessarily", "category": "missing_letter"},
+ {"wrong": "highligting", "correct": "highlighting", "category": "missing_letter"},
+ {"wrong": "descripions", "correct": "descriptions", "category": "missing_letter"},
+ {"wrong": "parition", "correct": "partition", "category": "missing_letter"},
+ {"wrong": "oyu", "correct": "you", "category": "transposed"}
+ ],
+ "false_positives": [
+ "table-bordered",
+ "table-striped",
+ "table-condensed",
+ "Power Query query",
+ "Column column",
+ "Table table"
+ ]
+}
diff --git a/scripts/ci_spellcheck.py b/scripts/ci_spellcheck.py
new file mode 100644
index 00000000..8f5af401
--- /dev/null
+++ b/scripts/ci_spellcheck.py
@@ -0,0 +1,396 @@
+#!/usr/bin/env python3
+"""
+CI Typo Check - Detect known typos in English markdown files.
+
+This script provides a reliable, fast check for 100% confirmed typos.
+It scans English markdown files across the project (excluding build output,
+version-control directories, and non-English localized content) while skipping
+code blocks, inline code, and YAML frontmatter.
+
+For more nuanced checks, use cspell or LLM-based spellcheck.
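+
+Typical local invocation (a sketch; assumes the layout added in this change,
+with the typo list at data/common_typos.json relative to the project root):
+
+    python scripts/ci_spellcheck.py
+
+The exit code is 0 when no typos are found and 1 when typos are found or the
+typo data file is missing or invalid. Requires Python 3.9+ (built-in generic
+types in annotations and Path.is_relative_to).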
+"""
+
+import json
+import os
+import re
+import sys
+from pathlib import Path
+from typing import NamedTuple, TypedDict
+
+
+class TypoEntry(TypedDict):
+ """A typo pattern entry."""
+ wrong: str
+ correct: str
+ category: str
+
+
+class TypoData(TypedDict, total=False):
+ """Typo data file structure."""
+ description: str
+    version: str
+    last_updated: str
+ typos: list[TypoEntry]
+ false_positives: list[str]
+
+
+class TypoMatch(NamedTuple):
+ """A typo match found in a file."""
+ file: Path
+ line_num: int
+ line_text: str
+ typo: str
+ correction: str
+
+
+class CompiledTypo(NamedTuple):
+ """A precompiled typo pattern."""
+ pattern: re.Pattern[str]
+ wrong: str
+ correct: str
+
+
+# Directories to skip (pruned before traversal for performance)
+EXCLUDED_DIRS = {
+ ".git", ".github", ".vscode", ".idea",
+ "node_modules", "venv", ".venv", "__pycache__",
+ "site-packages", "dist", "build", ".cache",
+ "_site", "public", "output", # Generated site output
+ "localizedContent", # Skip non-English localized content
+}
+
+
+def load_and_validate_typo_data(data_path: Path) -> TypoData:
+ """Load and validate typo patterns from JSON file."""
+ try:
+ with open(data_path, "r", encoding="utf-8") as f:
+ data = json.load(f)
+ except json.JSONDecodeError as e:
+ raise ValueError(f"Invalid JSON in {data_path}: {e}") from e
+
+ # Validate structure
+ if "typos" not in data:
+ raise ValueError(f"Missing 'typos' key in {data_path}")
+ if not isinstance(data["typos"], list):
+ raise TypeError(f"'typos' must be a list in {data_path}")
+
+ for i, entry in enumerate(data["typos"]):
+ if not isinstance(entry, dict):
+ raise TypeError(f"Typo entry {i} must be a dict")
+ if "wrong" not in entry or "correct" not in entry:
+ raise ValueError(f"Typo entry {i} missing 'wrong' or 'correct' key")
+ # Validate string types and non-empty values
+ if not isinstance(entry["wrong"], str) or not isinstance(entry["correct"], str):
+ raise TypeError(f"Typo entry {i} 'wrong' and 'correct' must be strings")
+ if not entry["wrong"].strip() or not entry["correct"].strip():
+ raise ValueError(f"Typo entry {i} 'wrong' and 'correct' must be non-empty")
+
+ return data
+
+
+def compile_typo_patterns(typos: list[TypoEntry]) -> list[CompiledTypo]:
+ """Precompile regex patterns for all typos (done once, not per file)."""
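+    # Illustrative sketch of the matching behaviour: an entry such as
+    # {"wrong": "seperate", "correct": "separate"} compiles to r"\bseperate\b"
+    # with re.IGNORECASE, so "seperate" and "Seperate" match while "seperated"
+    # does not (the trailing \b requires a word boundary).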
+ compiled: list[CompiledTypo] = []
+ for typo in typos:
+ pattern = re.compile(rf"\b{re.escape(typo['wrong'])}\b", re.IGNORECASE)
+ compiled.append(CompiledTypo(
+ pattern=pattern,
+ wrong=typo["wrong"],
+ correct=typo["correct"],
+ ))
+ return compiled
+
+
+def compile_false_positive_patterns(false_positives: list[str]) -> list[re.Pattern[str]]:
+ """Precompile regex patterns for false positives with word boundaries."""
+ return [
+ re.compile(rf"\b{re.escape(fp)}\b", re.IGNORECASE)
+ for fp in false_positives
+ ]
+
+
+def strip_code_blocks(content: str, strip_inline: bool = True) -> list[tuple[int, str]]:
+ """
+ Return lines with code blocks removed.
+
+ Returns list of (original_line_num, line_text) tuples,
+ skipping lines inside fenced code blocks, indented code blocks,
+ and YAML frontmatter.
+
+ Args:
+ content: The markdown content to process.
+ strip_inline: If True, also strip inline code (`code`). Set to False
+ for repeated word detection to avoid false positives
+ like "as `code` as" being detected as "as as".
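+
+    Illustrative example: for the four-line document "---", "title: x", "---",
+    "Hello world", only (4, "Hello world") is returned; original line numbers
+    are preserved even though the frontmatter lines were skipped.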
+ """
+ # Strip UTF-8 BOM if present (appears at start of some files)
+ if content.startswith("\ufeff"):
+ content = content[1:]
+
+ lines = content.splitlines()
+ result: list[tuple[int, str]] = []
+ in_fenced_block = False
+ in_frontmatter = False
+ prev_blank = True # Track if previous line was blank (for indented code detection)
+
+ for line_num, line in enumerate(lines, start=1):
+ stripped = line.strip()
+
+ # Handle YAML frontmatter (must start at line 1, allow leading whitespace)
+ if line_num == 1 and stripped == "---":
+ in_frontmatter = True
+ continue
+ if in_frontmatter:
+ if stripped == "---" or stripped == "...":
+ in_frontmatter = False
+ continue
+
+ # Check for fenced code block markers (``` or ~~~)
+ if stripped.startswith("```") or stripped.startswith("~~~"):
+ in_fenced_block = not in_fenced_block
+ prev_blank = False
+ continue
+
+ if in_fenced_block:
+ prev_blank = False
+ continue
+
+ # Skip indented code blocks (4+ spaces or tab after blank line)
+        # CommonMark: an indented code block cannot interrupt a paragraph, so require a preceding blank line
+ is_indented_code = (
+ prev_blank and
+ len(line) > 0 and
+ (line.startswith(" ") or line.startswith("\t"))
+ )
+ if is_indented_code:
+ # Don't update prev_blank - stay in indented code mode
+ continue
+
+ # Track blank lines for indented code detection
+ prev_blank = len(stripped) == 0
+
+ if stripped: # Non-empty, non-code line
+ if strip_inline:
+ # Strip inline code: handle both `code` and ``code with `backticks` inside``
+ # First handle double-backtick spans, then single-backtick spans
+ line_no_inline = re.sub(r"``[^`]+``", "", line)
+ line_no_inline = re.sub(r"`[^`]+`", "", line_no_inline)
+ result.append((line_num, line_no_inline))
+ else:
+ result.append((line_num, line))
+
+ return result
+
+
+def find_typos_in_lines(
+ file_path: Path,
+ lines: list[tuple[int, str]],
+ compiled_typos: list[CompiledTypo],
+ fp_patterns: list[re.Pattern[str]],
+) -> list[TypoMatch]:
+ """Search lines for known typos."""
+ matches: list[TypoMatch] = []
+
+ for line_num, line in lines:
+ for compiled in compiled_typos:
+ if compiled.pattern.search(line):
+ # Check if this line matches a false positive pattern
+ if any(fp_pat.search(line) for fp_pat in fp_patterns):
+ continue
+
+ matches.append(TypoMatch(
+ file=file_path,
+ line_num=line_num,
+ line_text=line.strip(),
+ typo=compiled.wrong,
+ correction=compiled.correct,
+ ))
+
+ return matches
+
+
+def find_repeated_words(
+ file_path: Path,
+ lines: list[tuple[int, str]],
+ fp_patterns: list[re.Pattern[str]],
+) -> list[TypoMatch]:
+ """Find repeated words like 'the the' or 'is is'."""
+ matches: list[TypoMatch] = []
+
+    # Catch any repeated word of 2+ characters (more comprehensive than a fixed list of known pairs)
+ repeated_pattern = re.compile(r"\b(\w{2,})\s+\1\b", re.IGNORECASE)
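+    # Illustrative behaviour: "the the" and "The  the" are caught (the
+    # backreference honours re.IGNORECASE and any whitespace run), while
+    # single-character repeats such as "a a" are ignored by the {2,} quantifier.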
+
+ for line_num, line in lines:
+ # Skip lines that match false positive patterns
+ if any(fp_pat.search(line) for fp_pat in fp_patterns):
+ continue
+
+ # Use finditer to catch ALL repeated words on a line
+ for match in repeated_pattern.finditer(line):
+ matches.append(TypoMatch(
+ file=file_path,
+ line_num=line_num,
+ line_text=line.strip(),
+ typo=match.group(0),
+ correction=f"{match.group(1)} (remove duplicate)",
+ ))
+
+ return matches
+
+
+def is_safe_path(file_path: Path, project_root: Path) -> bool:
+ """Check if path is safely within project root (no symlink escapes)."""
+ try:
+ resolved = file_path.resolve()
+ root_resolved = project_root.resolve()
+ return resolved.is_relative_to(root_resolved)
+ except (OSError, ValueError):
+ return False
+
+
+def escape_gha_annotation(text: str) -> str:
+ """Escape GitHub Actions workflow command control characters.
+
+ GHA uses %, :, and newlines as control characters in annotations.
+ Escaping prevents malicious input from injecting extra annotations.
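+
+    Illustrative example: without escaping, a value containing a newline
+    followed by "::error::..." could end the current workflow command and
+    start a new annotation; with the newline encoded as %0A the whole value
+    stays inside a single annotation message.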
+ """
+ return (
+ text
+ .replace("%", "%25")
+ .replace("\r", "%0D")
+ .replace("\n", "%0A")
+ .replace(":", "%3A")
+ )
+
+
+def find_markdown_files(project_root: Path) -> list[Path]:
+ """
+ Find markdown files using os.walk with directory pruning.
+
+ Scans the entire project root while excluding build artifacts,
+ version control directories, and non-English localized content.
+
+ This is more efficient than rglob() because it prunes excluded
+ directories BEFORE traversing into them.
+ """
+ md_files: list[Path] = []
+
+ for dirpath, dirnames, filenames in os.walk(project_root):
+ # Prune excluded directories IN PLACE (modifies dirnames)
+ # This prevents os.walk from descending into them
+ dirnames[:] = [
+ d for d in dirnames
+ if d not in EXCLUDED_DIRS and not d.startswith(".")
+ ]
+
+ # Collect markdown files
+ for filename in filenames:
+ if filename.endswith(".md"):
+ file_path = Path(dirpath) / filename
+ # Safety check for symlinks
+ if is_safe_path(file_path, project_root):
+ md_files.append(file_path)
+
+ return md_files
+
+
+def main() -> int:
+ """Run spellcheck and return exit code."""
+ # Find project root (where data/common_typos.json lives)
+ script_dir = Path(__file__).parent
+ project_root = script_dir.parent
+
+ data_path = project_root / "data" / "common_typos.json"
+ if not data_path.exists():
+ print(f"::error::Typo data not found at {data_path}", file=sys.stderr)
+ return 1
+
+ # Load and validate typo data
+ try:
+ typo_data = load_and_validate_typo_data(data_path)
+ except (ValueError, TypeError) as e:
+ print(f"::error::Invalid typo data file: {e}", file=sys.stderr)
+ return 1
+
+ typos = typo_data.get("typos", [])
+ false_positives = typo_data.get("false_positives", [])
+
+ # Precompile all patterns ONCE (not per file)
+ compiled_typos = compile_typo_patterns(typos)
+ fp_patterns = compile_false_positive_patterns(false_positives)
+
+ print(f"Loaded {len(typos)} typo patterns")
+ print(f"Loaded {len(false_positives)} false positive exclusions")
+ print()
+
+ # Find markdown files with directory pruning
+ md_files = find_markdown_files(project_root)
+
+ print(f"Scanning {len(md_files)} markdown files...")
+ print()
+
+ # Search for typos
+ all_matches: list[TypoMatch] = []
+
+ for md_file in md_files:
+ try:
+ # Read file ONCE
+ content = md_file.read_text(encoding="utf-8")
+ except (OSError, UnicodeDecodeError) as e:
+ print(f"Warning: Could not read {md_file}: {e}", file=sys.stderr)
+ continue
+
+ # Strip code blocks and inline code for typo detection
+ lines_stripped = strip_code_blocks(content, strip_inline=True)
+
+ # Find typos (pass preprocessed lines with inline code stripped)
+ matches = find_typos_in_lines(md_file, lines_stripped, compiled_typos, fp_patterns)
+ all_matches.extend(matches)
+
+ # For repeated words, keep inline code to avoid false positives
+ # like "as `code` as" being detected as "as as"
+ lines_with_inline = strip_code_blocks(content, strip_inline=False)
+ repeated = find_repeated_words(md_file, lines_with_inline, fp_patterns)
+ all_matches.extend(repeated)
+
+ # Report results
+ if all_matches:
+ print("=" * 60)
+ print("TYPOS FOUND")
+ print("=" * 60)
+ print()
+
+ # Group by file
+ by_file: dict[Path, list[TypoMatch]] = {}
+ for match in all_matches:
+ by_file.setdefault(match.file, []).append(match)
+
+ for file_path, matches in sorted(by_file.items()):
+ rel_path = file_path.relative_to(project_root)
+ print(f"File: {rel_path}")
+ for match in matches:
+ # GitHub Actions annotation format for inline PR comments
+ # Escape control characters to prevent annotation injection
+ safe_typo = escape_gha_annotation(match.typo)
+ safe_correction = escape_gha_annotation(match.correction)
+ print(f"::error file={rel_path},line={match.line_num}::"
+ f"Typo: '{safe_typo}' should be '{safe_correction}'")
+ print(f" Line {match.line_num}: '{match.typo}' -> '{match.correction}'")
+ # Truncate long lines
+ line_preview = match.line_text[:80]
+ if len(match.line_text) > 80:
+ line_preview += "..."
+ print(f" {line_preview}")
+ print()
+
+ print("=" * 60)
+ print(f"Total: {len(all_matches)} typo(s) found in {len(by_file)} file(s)")
+ print()
+ print("These are 100% reliable typo patterns.")
+        print("If a match is a false positive, add it to the false_positives list in data/common_typos.json.")
+ return 1
+ else:
+ print(f"Typo check passed: {len(md_files)} files checked, no typos found.")
+ return 0
+
+
+if __name__ == "__main__":
+ sys.exit(main())