Orphaned Rules - 7 Rules
When automating an office, it's essential to use the right tools and techniques for achieving seamless functionality and user satisfaction. The choice between drivers, programming, macros, or schedules depends on the task at hand, the level of flexibility required, and the system's complexity.
Choose Drivers First for Core Functionality
Drivers form the foundation of automation:
- Always use official Control4-certified drivers when available. They ensure compatibility, reliability, and future updates.
- Official drivers provide additional functionality and can automate simple tasks without the need for programming (e.g., turning lights on with motion sensors)
- Opt for third-party drivers when:
  - Official drivers aren't available
  - Specific customizations (e.g., for niche devices) are needed
Rely on Programming for Custom Behaviors
Control4's Composer programming allows intricate automation for advanced scenarios:
- Use programming logic to personalize behaviors based on conditions, such as:
  - "If a security key is used to access the building, deactivate the alarm and turn on the reception lights and signage"
If-then programming scenario:
- If there's no motion after 11 PM: Turn off all lights.
Figure: Good example - Programming creates flexible automation for daily routines.
Use Macros for Grouped Actions
Macros simplify multi-step operations:
- Combine several automation tasks into a single command.
- Example: A "Close office" macro:
  - Turns off lights.
  - Turns off the air conditioning.
  - Arms the security system.
- Use macros sparingly to avoid conflicts with individual device settings.
Create Schedules for Predictable Routines
Schedules work best for time-based, repetitive tasks:
- Configure when tasks occur automatically, such as:
  - Lighting scenes transitioning at sunrise or sunset.
- Keep schedules clean and avoid too many overlapping events.
Combine Techniques for Advanced Scenarios
For the best results:
- Use drivers for baseline device integration.
- Add programming for specific conditional behavior.
- Utilize macros for easily triggering grouped actions.
- Set up schedules for repetitive, time-based automation.
Avoid Common Mistakes
- Overusing programming: Too much custom logic can complicate maintenance and troubleshooting.
- Relying solely on macros or schedules: These are useful for simplifying repetitive tasks but cannot handle conditional triggers efficiently.
- Using unsupported drivers: These can break during Control4 OS updates, creating unanticipated downtime.
By choosing the right combination of drivers, programming, macros, and schedules, you can design a robust and easily manageable automation system that enhances the office's usability and efficiency.
In today's development world, teams often consist of developers working on different operating systems (OS), such as Windows, macOS, and Linux. While each OS has its own strengths, managing a cross-platform development environment can introduce challenges.
Issues like inconsistent line endings, platform-specific setup scripts, and configuration mismatches can lead to headaches for both individual developers and the team as a whole.
Addressing Line Ending Differences (CRLF vs. LF)
One of the most common issues faced by teams working with Git across different platforms is the handling of line endings. Windows uses CRLF (Carriage Return + Line Feed) for line endings, while macOS and Linux use LF (Line Feed) only. This can lead to unnecessary diffs in Git and potential merge conflicts.
Solution: Use Git's Line Ending Configuration
Git provides a way to manage line endings across different operating systems by using the `core.autocrlf` setting. This configuration ensures that line endings are normalized when files are checked in and out of the repository.

- Windows users: Set Git to automatically convert line endings to CRLF when checking out files, and convert them back to LF when committing:

  ```bash
  git config --global core.autocrlf true
  ```
- macOS/Linux users: Set Git to convert CRLF line endings to LF when committing, and leave line endings untouched when checking out files:

  ```bash
  git config --global core.autocrlf input
  ```
- Repository-wide configuration: It's a good practice to enforce this configuration across the team via a `.gitattributes` file, which allows you to define how specific file types should be handled. For example:

  ```
  * text=auto
  *.sh text eol=lf
  *.bat text eol=crlf
  ```

This ensures that, no matter what OS a developer is using, files are checked out and committed with consistent line endings. The `eol` attribute specifically handles cases like batch scripts or shell scripts that may need different line endings.

Creating Cross-Platform Setup Scripts for Easy Onboarding
Onboarding new developers is a critical step in ensuring that everyone is up and running quickly, but multi-OS teams often struggle with platform-specific setup instructions. The goal is to make onboarding as seamless as possible, whether a developer is using Windows, macOS, or Linux.
Solution: Write Cross-Platform Setup Scripts Using PowerShell
To streamline onboarding and ensure compatibility across different platforms, it's crucial to write setup scripts that work on all major operating systems. PowerShell is an ideal choice because it is natively available on Windows and can also be installed on macOS and Linux, making it a truly cross-platform solution. Here's how you can approach writing cross-platform setup scripts with PowerShell:
- PowerShell for Windows, macOS, and Linux: Instead of using separate scripts for each platform, write a PowerShell script (`setup.ps1`) that works on all platforms. PowerShell Core (now simply known as PowerShell) is cross-platform and can be run on Windows, macOS, and Linux, allowing you to write one script for all environments. You can use package managers like `Chocolatey` on Windows, `Homebrew` on macOS, or `apt`/`yum` on Linux within the same PowerShell script.
- Handling OS-Specific Logic in PowerShell: PowerShell makes it easy to detect the operating system and execute different setup commands depending on the platform. For example, you can check whether the script is running on Windows, macOS, or Linux and then call the appropriate package manager or command for each environment.

Here's an example of a cross-platform setup script in PowerShell:
```powershell
# Detect the OS and perform platform-specific setup
# $IsWindows, $IsMacOS and $IsLinux are automatic variables available in PowerShell 6+;
# the $env:OS check keeps the script working on Windows PowerShell 5.1 as well
if ($IsWindows -or $env:OS -like "*Windows*") {
    Write-Host "Setting up for Windows..."
    # Windows-specific setup, e.g., installing packages via Chocolatey
    choco install somepackage
}
elseif ($IsMacOS) {
    Write-Host "Setting up for macOS..."
    # macOS-specific package manager (Homebrew)
    brew install somepackage
}
elseif ($IsLinux) {
    Write-Host "Setting up for Linux..."
    # Linux-specific package manager (apt, yum, etc.)
    sudo apt-get install somepackage
}
else {
    Write-Host "Unsupported OS detected."
}

Write-Host "Setup complete!"
```
This script does the following:
- It uses PowerShell's built-in `$IsWindows`, `$IsMacOS`, and `$IsLinux` automatic variables to detect the operating system.
- Depending on the platform, it uses the appropriate package manager:
  - Windows: Uses `choco` (Chocolatey) to install software.
  - macOS: Uses `brew` (Homebrew) for package installation.
  - Linux: Uses `apt-get` or similar package managers.
By using PowerShell, you can create a single script that works across all major platforms, reducing the need for platform-specific scripts and simplifying the setup process for your users.
Consistent Git Configuration for Multi-OS Teams
To further ensure smooth collaboration among multi-OS teams, it's important to standardize Git configurations. Beyond line endings, Git offers other configurations that help maintain consistency.
Key Git configurations for Multi-OS Teams:
- User Name and Email: Ensure each developer has set up their user name and email, as this is crucial for committing with correct author information:

  ```bash
  git config --global user.name "Your Name"
  git config --global user.email "youremail@example.com"
  ```
- Global `.gitignore`: A global `.gitignore` file can help ensure that certain system files (e.g., `Thumbs.db` on Windows or `.DS_Store` on macOS) are ignored across all repositories. You can create and set a global `.gitignore` file using the following command:

  ```bash
  git config --global core.excludesfile ~/.gitignore_global
  ```

  And in `~/.gitignore_global`:

  ```
  .DS_Store
  Thumbs.db
  ```
- Hooks and Templates: Some repositories might require hooks or commit templates to enforce conventions like conventional commit messages or certain commit checks. Using a `.githooks` directory or `.gitmessage` file in the repository can help maintain consistency across platforms, as shown below.
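For example, assuming the team keeps its hooks in a versioned `.githooks` folder and its commit template in a `.gitmessage` file at the repository root, each developer can point Git at them with:

```bash
git config core.hooksPath .githooks
git config commit.template .gitmessage
```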
Structuring and optimizing your TinaCMS project is essential to achieve clarity, enhance performance, and prevent build failures. Poorly optimized projects can lead to slow site performance, increased server load, and even failed builds due to excessive or inefficient data requests.
Let's explore how to structure your project effectively and apply best practices to boost performance both at runtime and during the build process.
1. Structuring your TinaCMS Architecture
When working with large datasets or generating multiple subcomponents, following best practices is crucial to maintain performance and clarity.
❌ Bad practices

- Using deeply nested schemas with nested references
  - Deeply nested schemas increase the complexity of the project, making it harder to manage and more prone to build failures
  - They can also lead to inefficient data fetching, further slowing down both runtime and build processes
✅ Good practices

- Making a single request at a top-level server component and using React Context or a state management library
  - Data fetched at the top level can be stored in a React Context or a global state management solution (e.g., Redux). This allows all components to access the data without the need to pass props manually
  - This approach ensures better scalability, as subcomponents can access the necessary data directly from the context or store, eliminating redundant API calls and avoiding prop drilling
```tsx
export default async function Home({ params }: HomePageProps) {
  const location = params.location;
  const websiteProps = await client.queries.website({
    relativePath: `${location}/website.md`,
  });
  const { conferencesData, footerData, speakers } = websiteProps.data;

  return (
    <ConferenceContext.Provider value={conferencesData}>
      <FooterContext.Provider value={footerData}>
        <PageTransition>
          <HomeComponent speakers={speakers} />
        </PageTransition>
      </FooterContext.Provider>
    </ConferenceContext.Provider>
  );
}

export async function generateStaticParams() {
  const contentDir = path.join(process.cwd(), 'content/websites');
  const locations = await fs.readdir(contentDir);
  return locations.map((location) => ({ location }));
}
```
Figure: This code provides `conferencesData` and `footerData` via contexts, while passing `speakers` directly as props to `HomeComponent` for immediate use

- Caching data at a top level and accessing it when necessary
  - If passing props is not feasible (e.g., when a component depends on Next.js router information), you should make a general top-level request, cache the data, and then access it directly from the cache within the component (see the sketch below)
  - This approach ensures efficient data retrieval and reduces the server load at build time
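A minimal sketch of this pattern, assuming a Next.js App Router project and the generated TinaCMS client (the import path below may differ in your project), could use React's `cache()` helper, which ships with the React version bundled with the App Router, to deduplicate the top-level request:

```tsx
import { cache } from 'react';
// Assumed path to the TinaCMS generated client - adjust to your project layout
import { client } from '../tina/__generated__/client';

// cache() memoizes the result for the duration of a single render/build pass,
// so any server component can call getWebsiteData() without issuing a duplicate query
export const getWebsiteData = cache(async (location: string) => {
  const websiteProps = await client.queries.website({
    relativePath: `${location}/website.md`,
  });
  return websiteProps.data;
});

// Usage inside any server component:
// const { footerData } = await getWebsiteData(params.location);
```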
2. Improving Runtime Performance
Optimizing runtime performance is key to delivering a fast and responsive user experience.
❌ Bad practices

- Using client-side requests instead of relying on cached data from the build process
  - This approach can negate the benefit of static site generation, where data is fetched and cached during the build
  - Making too many client-side requests increases server load and slows down the application
✅ Good practices

- Using static site generation (SSG) to fetch and cache content during the build
  - With TinaCMS, data can be fetched at build time, which gives you:
    - Minimal dynamic fetching, enhancing performance
    - Faster load times
    - Less strain on the server
3. Improving Build Performance
To ensure smooth and reliable builds, it's important to follow best practices that prevent excessive server load and manage data efficiently.

✅ Best practices

- Write custom GraphQL queries
  - You can improve data retrieval by creating your own GraphQL queries
  - Auto-generated GraphQL queries are not optimized; as a result, they may include nested objects with redundant data. For example, recipes that include an ingredients object, which in turn includes the same recipes again. Creating custom queries can reduce the size of objects and improve performance (see the sketch below)
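As a rough illustration (the content type, field names, and endpoint are hypothetical and should be adapted to your schema and setup), a trimmed custom query might look like this:

```ts
// Hypothetical example: request only the fields the page needs instead of the
// full auto-generated query that also expands nested references
const RECIPE_QUERY = /* GraphQL */ `
  query Recipe($relativePath: String!) {
    recipe(relativePath: $relativePath) {
      title
      ingredients {
        name
        quantity
      }
    }
  }
`;

export async function fetchRecipe(relativePath: string) {
  // The local Tina dev server usually exposes GraphQL here; swap in your
  // Tina Cloud endpoint (and auth headers) for production builds
  const res = await fetch('http://localhost:4001/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: RECIPE_QUERY, variables: { relativePath } }),
  });
  const { data } = await res.json();
  return data.recipe;
}
```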
Logging is a critical component in modern applications, but it can easily introduce performance overhead.
.NET 6 introduced the `LoggerMessageAttribute`, a feature in the `Microsoft.Extensions.Logging` namespace that enables source-generated, highly performant logging APIs. This approach eliminates runtime overheads like boxing and temporary allocations, making it faster than traditional logging methods.

Key performance benefits of LoggerMessageAttribute
- Source Generation: Automatically generates the implementation of partial methods with compile-time diagnostics.
- Improved Performance: Reduces runtime overhead by leveraging compile-time optimizations.
- Flexible Usage: Supports static and instance-based methods with configurable log levels and message templates.
How to use LoggerMessageAttribute
Define logging methods as partial and static to trigger the code generator:
```csharp
public static partial class Log
{
    [LoggerMessage(
        EventId = 0,
        Level = LogLevel.Critical,
        Message = "Could not open socket to `{HostName}`")]
    public static partial void CouldNotOpenSocket(
        ILogger logger, string hostName);
}
```
Logging methods can also be used in an instance context by accessing an `ILogger` field or primary constructor parameter:

```csharp
public partial class InstanceLoggingExample(ILogger logger)
{
    [LoggerMessage(
        EventId = 0,
        Level = LogLevel.Critical,
        Message = "Could not open socket to `{HostName}`")]
    public partial void CouldNotOpenSocket(string hostName);
}
```
Using `LoggerMessageAttribute` with the `JsonConsole` formatter can produce structured logs. In our log messages we can specify custom event names as well as utilize string formatters:

```csharp
[LoggerMessage(
    EventId = 9,
    Level = LogLevel.Trace,
    EventName = "PropertyValueEvent",
    Message = "In {City} the average property value is {Value:E}")]
public static partial void PropertyValueInAustralia(
    ILogger logger, string city, double value);
```
Constraints
When using `LoggerMessageAttribute`, ensure:

- Logging methods must be `partial` and return `void`.
- Logging method names must not start with an underscore.
- Parameter names of logging methods must not start with an underscore.
- Logging methods may not be defined in a nested type.
- Logging methods cannot be generic.
- If a logging method is `static`, the `ILogger` instance is required as a parameter.
More information
See this great article on Microsoft Learn which goes into more detail and usage examples: Compile-time logging source generation in .NET.
When publishing an npm package, following Semantic Versioning (SemVer) is essential. It communicates changes clearly to your users and ensures smooth updates for their projects.
Why use SemVer in npm Publishing
Semantic Versioning (SemVer) helps your users understand the impact of updates and manage their own dependencies more effectively. By adhering to SemVer, you make it clear whether an update introduces breaking changes, new features, or just bug fixes:
- MAJOR (Breaking Changes): Signals to users that there are incompatible changes.
- MINOR (New Features): Informs users about new features that won't break existing functionality.
- PATCH (Bug Fixes): Indicates that bug fixes or small improvements have been made without changing behavior.
Learn more about semantic versioning
Common Mistakes to Avoid ⚠️

❌ Incorrect Versioning
Ensure you understand the type of changes you're making. For example, if you introduce breaking changes but incorrectly release them as a patch update (e.g., from `1.0.0` to `1.0.1`), it can cause significant issues for users relying on version ranges like `^1.0.0` or `~1.0.0` in their `package.json`.

These ranges automatically pull in updates for compatible versions. By incorrectly marking a breaking change as a patch, you risk breaking their projects without warning. Always increment the MAJOR version for breaking changes to ensure users can consciously decide when to adopt them.
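To see how these ranges behave, here is a quick sketch using the `semver` package (listed under tools below); the version numbers are purely illustrative:

```ts
import semver from "semver";

// ^1.0.0 accepts anything below 2.0.0, so a breaking change shipped as 1.0.1
// would be pulled in automatically by consumers using this range
console.log(semver.satisfies("1.0.1", "^1.0.0")); // true
console.log(semver.satisfies("2.0.0", "^1.0.0")); // false, major bumps must be adopted explicitly

// ~1.0.0 is stricter: it only accepts patch updates within 1.0.x
console.log(semver.satisfies("1.0.5", "~1.0.0")); // true
console.log(semver.satisfies("1.1.0", "~1.0.0")); // false
```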
To better understand what the `^`, `~`, and other symbols mean in npm versioning, check out What do the different symbols mean for npm version?

❌ Forgetting to Communicate Breaking Changes
For major updates, clearly communicate the breaking changes in your release notes or changelog. This helps users prepare for or adapt to the changes.
✅ Figure: Good Example - Tell the public about the major release update and breaking changes
Tools to Help You Follow SemVer
- changesets: A tool designed to manage versioning, changelogs, and release workflows in a more structured way. It helps you track changes in your codebase with "changeset" files that describe the changes made and their version impact, ensuring consistent versioning and changelog generation.
- Semantic Release: An automated tool that helps you manage versioning and changelogs based on your commit messages. It ensures that versions are incremented correctly according to your changes.
- Standard Version: A tool for automating versioning and changelog generation based on conventional commit messages. It follows the rules of SemVer and can help reduce manual errors in version management.
- Keep a Changelog: A standard for writing clear and consistent changelogs. Keeping a changelog is essential for communicating breaking changes and other updates clearly with users.
- semver: A library that helps parse and compare version numbers. It's useful for checking if a version change follows the SemVer rules.
- Semantic Versioning Specification: The official guide for Semantic Versioning. It outlines the full specification and provides more detailed rules that you should follow when working with SemVer.
Important SemVer Rules to Follow ✅
Here are some key rules from the Semantic Versioning Specification that you should keep in mind:
- Version Numbers Must Be Non-Decreasing: Version numbers must always increase. If you publish a lower version than the previous one, it will cause issues for users.
- Pre-release Versions and Build Metadata: SemVer allows for the use of pre-release versions (e.g., `1.0.0-alpha.1`) and build metadata (e.g., `1.0.0+20130313144700`). These should be used to indicate versions that are not ready for production or specific builds that don't affect the versioning rules.
- Incrementing Versions:
  - Patch: Only increment for bug fixes and minor changes that do not affect the API.
  - Minor: Increment for new features that do not break backward compatibility.
  - Major: Increment for breaking changes to the API or backward incompatibilities.
- API Stability: Always ensure that your API is backward compatible unless you are marking it as a breaking change. It's essential to be mindful of how updates impact users' current implementations.
- Changelogs and Documentation: Always document changes thoroughly, particularly breaking changes, in your changelog and version history. This documentation provides context and helps users understand what to expect from each version.
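As a quick illustration of these increment rules using the `semver` library (the version numbers are just examples):

```ts
import semver from "semver";

// Each release type bumps a different part of the version
semver.inc("1.4.2", "patch"); // "1.4.3" - bug fixes only
semver.inc("1.4.2", "minor"); // "1.5.0" - backward-compatible features
semver.inc("1.4.2", "major"); // "2.0.0" - breaking changes

// Pre-release versions signal builds that are not production-ready
semver.inc("1.4.2", "prerelease", "alpha"); // "1.4.3-alpha.0"
```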
Contracts can be overwhelming to read and understand as they are often many pages and a wall of text. When you are asking someone to sign a contract, you must check they have read it, make sure they know they should get legal advice if they find it full of jargon, and hit the important points with them to ensure they know what they are agreeing to.
To make sure you don't miss any important info, you should find all the things you need to bring up and highlight them in yellow. This way, both parties get a summary of what the agreement is for, each of you understands the terms, and any confusion is removed.
For example, on an employment contract you would use yellow highlight to check the new employee is aware of the conditions of their job. You should highlight things like:
- Salary package amount
- Job title
- Start date
- Employment conditions such as company guidelines.
Let's face it, not all content is created equal. Sometimes you just need a simple document, and other times you want something more dynamic and interactive.
When to use Markdown (.md)
Markdown is perfect for straightforward content. Think of it like writing a clear, no-frills document. You'll want to use Markdown when:
- You're creating something simple like a blog post, documentation, or guide
- Your team includes people who aren't tech experts
- You want your page to load quickly
- You just need basic formatting like headings, lists, and images
Example: A recipe blog post with some text, headings, and a few pictures. Markdown handles this beautifully without any extra complexity.
When to use MDX (.mdx)
MDX steps up when you need something more powerful. It lets you add interactive elements and custom components to your content. You'll want MDX when:
- You need interactive features that go beyond static text
- You want to include custom components from different web frameworks
- Your content requires some programming logic
- You're creating tutorial content with live examples
Example: A coding tutorial with an interactive chart showing performance metrics, or a documentation page with a live code editor where readers can try out code in real-time.
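As a rough sketch of what that looks like (the `PerformanceChart` component and its import path are made up for illustration), an `.mdx` file mixes normal Markdown with components:

```mdx
import PerformanceChart from '../components/PerformanceChart'

## Rendering performance

Regular Markdown still works here: headings, lists, and images are written as usual.

<PerformanceChart data={[120, 80, 45]} />
```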
Things to consider
MDX isn't perfect for every situation. Before you jump in, consider:
- Complexity - Since it's more advanced than plain Markdown, non-technical teams might find it tricky
- Performance - Too many fancy components can slow down your page
- Extra setup - You'll need to manage more technical dependencies
The golden rule ⭐️
Choose Markdown for simple, fast content. Choose MDX when you need more interactive and dynamic features.
The key is to start simple. Use Markdown for most of your content, and only switch to MDX when you truly need those extra capabilities.