This is CodeArt's blog. We share thought leadership and technical tips and tricks - usually in English.
Calling all Optimizely (Episerver) developers in Copenhagen and the surrounding area. It's once again time for a real-life meetup!
A classic need on many websites - especially self-service sites - is a placeholder mechanism, so editors can use placeholders in their text that get replaced with user-specific data. Recently, working with a client, we ran into this need and tried out a new approach that empowers the content creators to solve it themselves.
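The full approach is in the post, but the core idea can be sketched roughly like this - note that the `{{FirstName}}` syntax, the `PlaceholderReplacer` name and the user data dictionary are my own illustrative assumptions, not necessarily what we ended up with:

```csharp
using System.Collections.Generic;
using System.Text.RegularExpressions;

// Minimal sketch: replace editor-written placeholders such as {{FirstName}}
// with user-specific values before the text is rendered.
public static class PlaceholderReplacer
{
    private static readonly Regex Placeholder = new Regex(@"\{\{(\w+)\}\}", RegexOptions.Compiled);

    public static string Replace(string text, IDictionary<string, string> userData)
    {
        return Placeholder.Replace(text, match =>
        {
            var key = match.Groups[1].Value;
            // Leave unknown placeholders untouched so editors can spot them.
            return userData.TryGetValue(key, out var value) ? value : match.Value;
        });
    }
}
```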
Using your content object (CurrentPage / CurrentBlock) as a makeshift viewmodel - changing settings or extending it with user data in the controller before passing it to the view - is unfortunately (and to my surprise) a pretty widespread practice among developers implementing Optimizely (Episerver) websites. But it really needs to stop.
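The alternative is straightforward: keep the content object read-only and put request- or user-specific state on a dedicated viewmodel. A minimal sketch of what that can look like in an Episerver page controller (ArticlePage, ArticleViewModel and VisitorName are illustrative names, not code from the post):

```csharp
using System.Web.Mvc;
using EPiServer.Core;
using EPiServer.Web.Mvc;

// Illustrative content type - in a real site it would carry a [ContentType] attribute.
public class ArticlePage : PageData
{
    public virtual string Heading { get; set; }
}

// The viewmodel wraps the content and carries the user-specific data,
// so the content object itself is never mutated.
public class ArticleViewModel
{
    public ArticlePage CurrentPage { get; set; }
    public string VisitorName { get; set; }
}

public class ArticlePageController : PageController<ArticlePage>
{
    public ActionResult Index(ArticlePage currentPage)
    {
        var model = new ArticleViewModel
        {
            CurrentPage = currentPage,            // content stays read-only
            VisitorName = User?.Identity?.Name    // user data lives on the viewmodel
        };
        return View(model);
    }
}
```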
Since version 7 or 8 of Episerver (now Optimizely CMS), shared Blocks and Media have used the same folder structure. Some people see a benefit in the shared structure, and some absolutely hate it. Personally, I have gotten used to it - but I was recently asked if it's possible to split it up. Here's the hack I came up with.
When working with a content-heavy site, it can be very practical to use AI to identify named entities in the text. Last summer I made a prototype service that uses Named Entity Recognition in Danish, English and Swedish to tag content - but not until now did I find time to describe it in a blog post.
With KQL support in the REST API for the Episerver Profile Store, we have been given a powerful tool for querying the tracked profiles. In this post I share a collection of cool and useful queries you can use.
In the EU, the past year has brought even more rules and regulations governing which cookies can be set, which data can be collected and which consents are needed for it. While it may not be tricky to add a basic consent box, adding one that adheres to all the relevant legislation and then follows the consents given can be a bit more challenging. In this post I take a deep dive into how Cookie Information's solution, together with their Connector for Episerver, can make it easier - and faster - to accomplish.
When one of the market leaders in digital experience / content management / e-commerce acquires the market leader in Optimization and Experimentation - great things can be expected. But how will it differ from the optimization techniques used by Episerver customers today? Here are my thoughts.
We just launched a new version of the online tool Profile Manager - a tool that makes it easier for developers and content analysts to work with Episerver's Profile Store. The new version lets you easily try out different KQL queries and build Filter Definitions with them.
Spam comes in many forms and can be really annoying. Often, when you put a form on your website, it will be found by spam bots that post lots of spam through it. A common defense is CAPTCHAs, but they annoy real users and are typically not WCAG compliant. Here, I show a simple Episerver implementation of another approach that works wonders for me - the honeypot.
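The post shows the Episerver-specific implementation; the trick itself is independent of Episerver and can be sketched in a plain ASP.NET MVC controller (all names here are illustrative, under the assumption of a simple contact form):

```csharp
using System.Web.Mvc;

// Honeypot idea: the form contains an extra field that is hidden from humans
// with CSS. Real visitors leave it empty; spam bots tend to fill every field,
// so a non-empty value means the submission can be silently discarded.
public class ContactFormModel
{
    public string Email { get; set; }
    public string Message { get; set; }

    // The honeypot field - hidden via CSS in the view, never shown to users.
    public string Website { get; set; }
}

public class ContactController : Controller
{
    [HttpPost]
    public ActionResult Submit(ContactFormModel form)
    {
        if (!string.IsNullOrEmpty(form.Website))
        {
            // A bot filled in the hidden field: pretend everything went fine.
            return RedirectToAction("ThankYou");
        }

        // ... handle the legitimate submission here ...
        return RedirectToAction("ThankYou");
    }

    public ActionResult ThankYou()
    {
        return View();
    }
}
```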
Automatically tagging your content with topics from a known, well-described topic base like Wikipedia can have many cool uses: you can organize your content, suggest keywords and outbound links, and even build up interest profiles of your visitors. These interest profiles can then be used to suggest appropriate content and keep your visitors engaged. Inspired by Episerver Content Intelligence and a couple of earlier projects, I decided to run an experiment to see how far I could get with a DIY approach as opposed to the traditional cloud-based NLP/AI.
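The post describes the actual experiment; as a flavor of what "DIY" can mean here, the simplest possible variant is just phrase lookup against a locally stored list of Wikipedia topic titles. Everything in this sketch, including the TopicTagger name and the choice of single words and word pairs as candidates, is my own illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class TopicTagger
{
    // Match the words and adjacent word pairs of a text against a locally
    // stored set of Wikipedia topic titles instead of calling a cloud NLP service.
    // wikipediaTopics is assumed to be built with StringComparer.OrdinalIgnoreCase.
    public static IEnumerable<string> Tag(string text, HashSet<string> wikipediaTopics)
    {
        var words = text.Split(new[] { ' ', '.', ',', ';', ':', '\n' },
                               StringSplitOptions.RemoveEmptyEntries);

        // Candidate phrases: single words plus adjacent word pairs.
        var candidates = words.Concat(words.Zip(words.Skip(1), (a, b) => a + " " + b));

        return candidates
            .Where(wikipediaTopics.Contains)
            .Distinct(StringComparer.OrdinalIgnoreCase);
    }
}
```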
Wyam is a really cool .NET Core based tool for generating static websites. I took it out for a test run, using this blog's RSS feed as a content source.