Legacy CAFM: Replacement and Roadmap [with Concrete Recommendations]

Replacing old CAFM software is risky and expensive because faulty master data, poorly documented interfaces, and unclear security requirements can quickly paralyze a project. This guide provides a step-by-step checklist with priorities, effort estimates, and concrete checkpoints. In the end, you will know which interfaces need to be secured first, how to migrate data, which architectural questions determine cloud versus on-premise, and how to practically involve users and consultants.

Project Preparation and Goal Definition

Key takeaway: Set a maximum of three measurable goals from the outset; otherwise the project will drag on indefinitely. Suitable goals are availability, data integrity, and reporting performance; in addition, define the framework for master data migration, interfaces, security, process analysis, and user involvement.

Governance First: Appoint a steering committee with clear decision-makers from FM, IT, and controlling, plus a technical steering team. Establish a simple RACI for decisions on interfaces, cloud or on-premise architecture, and external consultants. Without clear escalation paths, a project ends in endless coordination.

Specify the Business Case: Compare the Total Cost of Ownership for cloud versus on-premise options, including integration costs with ERP, access control, energy management, CAD, and BIM. Evaluate not only license costs but also operation, backup, security certifications, and effort for reports and analyses.

Prioritization: Impact versus Effort

Practical Decision Mechanism: Create a priority matrix that evaluates each use case based on business impact and integration effort. Give integration capability (APIs, IFC/COBie, REST) a higher weight than feature requests in the GUI and only send interfaces with high operational impact into the first wave.

Tradeoff: Those who sacrifice integration capability will pay later with significantly higher interface costs and longer projects. In practice, a solution-independent API checklist is more important than a feature-rich prototype.

Concrete example: A municipal hospital prioritized integration with the SAP finance module and access control when replacing its system. It started with a pilot building, validated COBie exports from Revit, and established an Azure-based middleware layer before additional buildings could follow. This limited the risk of interface errors and allowed the go-live checklist to be processed pragmatically.

Criterion                                      Recommended weighting (%)
Integration capability (APIs, IFC, COBie)      30
Operating model and security requirements      25
Functional fit of core processes               20
Costs over 5 years (TCO)                       15
User acceptance and training effort            10
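
To make the matrix operational, the weighting can be turned into a simple score per candidate system. The sketch below is illustrative only: the weights come from the table above, while the 1-to-5 ratings and the two example systems are invented for demonstration.

```python
# Minimal sketch of the priority matrix. Weights are the recommended
# percentages from the table; ratings (1-5) are hypothetical inputs.
WEIGHTS = {
    "integration": 30,
    "operating_model_security": 25,
    "functional_fit": 20,
    "tco_5y": 15,
    "user_acceptance": 10,
}

def weighted_score(ratings: dict) -> float:
    """Weighted sum of 1-5 ratings, normalized to a 0-100 scale."""
    total = sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)
    return total / 5  # max rating is 5, weights sum to 100

# Hypothetical comparison of two candidate systems
system_a = {"integration": 5, "operating_model_security": 4,
            "functional_fit": 3, "tco_5y": 3, "user_acceptance": 4}
system_b = {"integration": 2, "operating_model_security": 5,
            "functional_fit": 5, "tco_5y": 4, "user_acceptance": 4}

print(weighted_score(system_a))  # integration-strong candidate: 79.0
print(weighted_score(system_b))  # feature-strong candidate: 77.0
```

Note how the integration-strong candidate wins despite a weaker functional fit; this is exactly the effect the higher integration weighting is meant to produce.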
Important: Only engage consultants with proven experience in CAFM replacements, including CAD/BIM and interface projects. Avoid consultants who primarily sell products instead of delivering integration playbooks.

Next step: Once goals, governance, and the priority matrix are in place, plan a short but binding scope freeze phase. This is followed by detailed process analysis and the involvement of key users for pilot planning.

Analyze Processes and Involve Users

In short: Those who only describe processes in PowerPoint will encounter surprises during go-live. Conduct pragmatic process recordings that map data flows, triggers, and user decisions – not just flowcharts. Only then will it become clear which master data, which interfaces, and which security rules are actually necessary.

Combine sources: Use system logs, ticket exports, building automation traces, and direct observation. Compare actual processing times from helpdesk exports with what departments claim. This cross-validation exposes workarounds that strain master data and interfaces.

Workshop roadmap for realistic process recording

  • Preparation: Collect CSV exports of tickets, SLA reports, and relevant ERP interface descriptions.
  • Half-day core workshop: Record the current workflow, mark exceptions and mobile work steps.
  • Quantification: Determine frequency, time expenditure, and data objects per process step.
  • Completion artifacts: Process map, change impact register entry, and priority list for interface development.

Trade-offs and limitations: User involvement costs time and generates conflicting requirements. Therefore, opt for a two-stage model: a small key-user team with decision-making authority and a larger user survey for validation. Too many decision-makers delay technical decisions – too few increase the risk of overlooking real workflows.

Concrete example: In a university data center, analysis revealed that cleaning staff sent repair requests to janitors via WhatsApp. Instead of building an expensive API for this communication, a simple mobile form was introduced that captured COBie metadata and could be directly imported into the CAFM. The solution reduced interface work and simultaneously improved data quality.

Practical conclusion for interfaces and security: Assign processes to data objects: Which processes require real-time data from building automation, and which only periodic exports from ERP? Prioritize interfaces based on operational risk and the single source of truth. Note that user-friendliness sometimes conflicts with stricter security measures – consciously decide where convenience is reduced in favor of IT security.

Involve users early, but make decisions with clear acceptance criteria – not based on majority rule.

Deliverables from this phase: Process map with variants, prioritized interface list, change impact register, key user list, and a small test script for the pilot migration. Start the pilot on 2 to 3 high-priority processes – this is where theory is tested in practice.

Master Data Takeover, Data Migration, and Cleansing

Core Problem: Migrations rarely fail due to the target software, but rather due to inadequate identity management of master data. Immediately create a source-of-truth register that contains the following fields for each data object: source system, last modifier, source_id, responsible person, update date, and quality status.
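
Such a register entry can be sketched as a small record type. The field names below mirror the list above, but their exact naming is an assumption; adapt it to your own conventions.

```python
from dataclasses import dataclass
from datetime import date

# Sketch of one entry in the source-of-truth register described above.
@dataclass
class RegisterEntry:
    source_id: str        # immutable key carried over from the legacy system
    source_system: str    # e.g. "Legacy-CAFM", "SAP", "Revit"
    last_modifier: str
    responsible: str      # accountable person for this data object
    updated: date
    quality_status: str = "unchecked"  # unchecked | validated | blocked

entry = RegisterEntry(
    source_id="ROOM-0815",
    source_system="Legacy-CAFM",
    last_modifier="j.doe",
    responsible="fm.team",
    updated=date(2024, 3, 1),
)
print(entry.quality_status)  # new objects start as "unchecked"
```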

Practical migration strategy

Work sequentially: Discovery, Profiling, Cleaning, Mapping, Transformation, Test-Load, Verification, Cutover. Decision Point: Clean as much as necessary before takeover, but no more. Complete cleaning delays deliveries; partial cleaning plus clear correction processes after go-live reduces project risk.

  1. Create data inventory: Map building data, master room data, assets, contractual partners, cost centers, and CAD/BIM artifacts, including file format and export path.
  2. Define quality gates: Set fixed thresholds and automated validation rules for completeness, duplicates, and referential integrity.
  3. Create field mapping: Document old field -> new field and record transformations in scripts; keep source_id as the primary key for traceability.
  4. Automate transformation rules: Use ETL/ELT tools like Talend or Safe Software FME for geodata; avoid one-off manipulations in Excel.
  5. Test Runs and Reconciliation: Perform at least three test runs per data area and compare counts, key distributions, and sample business checks.
  6. Cutover & Rollback Plan: Define small, atomic cutover waves with clear rollback triggers and a communication plan for affected users.
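
The quality gates from step 2 can be automated with a few lines of validation code. The record layout (source_id, room_id, name) and the sample data below are assumptions, not a fixed schema:

```python
# Sketch of automated quality gates: completeness, duplicates,
# and referential integrity, run against simple dict records.
def run_quality_gates(assets, rooms, required=("source_id", "room_id", "name")):
    errors = []
    # Completeness: every required field must be filled
    for a in assets:
        for f in required:
            if not a.get(f):
                errors.append(f"missing {f}: {a}")
    # Duplicates: source_id must be unique
    ids = [a["source_id"] for a in assets if a.get("source_id")]
    dupes = {i for i in ids if ids.count(i) > 1}
    errors += [f"duplicate source_id: {d}" for d in sorted(dupes)]
    # Referential integrity: every asset must point to a known room
    room_ids = {r["room_id"] for r in rooms}
    errors += [f"unknown room: {a['room_id']}"
               for a in assets if a.get("room_id") and a["room_id"] not in room_ids]
    return errors

rooms = [{"room_id": "R1"}, {"room_id": "R2"}]
assets = [
    {"source_id": "A1", "room_id": "R1", "name": "AHU-01"},
    {"source_id": "A1", "room_id": "R9", "name": "Pump-02"},  # dupe + bad ref
]
print(run_quality_gates(assets, rooms))
```

Anything this gate flags goes either into the cleanup backlog or, for critical errors, blocks the record from the cutover wave.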

Tradeoff: Importing historical transaction data completely costs time and increases complexity. In practice, it is often better to archive raw data and only transfer aggregated key figures into the new system – this reduces loading times and simplifies reporting.

Concrete example: A property manager had to migrate CAD floor plans, COBie sheets, and contract data. The team exported COBie from Revit, converted geometries with FME, and kept old invoice lines only in archive format. By retaining the old room designations as legacyroomid, traceability to historical disruptions remained possible at any time.

Important ruling: Reassigning IDs during migration is the fastest way to destroy history and interfaces. Keep legacy IDs, maintain a mapping, and avoid direct manipulation of CAD/BIM geometries if the business use has not been tested.

Prioritize data objects by operational relevance: Rooms, equipment with maintenance contracts, access data, followed by comprehensive historical logs.

Deliverables from this phase: Data inventory (CSV), field mapping (document + script), transformation scripts, test load logs, reconciliation report, and archive access plan. Use checklists.

For security: Log every migration action and keep audit trails for GDPR relevance. If unsure, consult the BSI guidelines and check IFC/COBie exports against buildingSMART test rules.

Interfaces, System Integration, and CAD/BIM Integration

Key takeaway: Not every connection needs to exist in real time; decide on interfaces based on operational purpose, not technological ideals. Prioritize integrations based on operational risk, data availability, and maintainability.

Technical integration patterns and their suitability

Pattern decision: Opt for simple batch exports for billing data and reports, event- or API-based connections for fault messages, and a local gateway for building automation. Point-to-point is cheap to start but expensive to operate; with three to four connected systems, middleware or an iPaaS is worthwhile.

  • Batch (CSV/ETL): suitable for historical logs, invoice data, overnight runs for reporting.
  • API/Event (REST / Webhook): required for tickets, fault reports, access synchronization with low latency.
  • Gateway/Edge (OPC UA / MQTT local broker): essential when building management technology requires low latency or local security.
  • Middleware/iPaaS: reduces long-term maintenance effort, offers mapping, monitoring, and retry logic.
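
The retry logic that middleware typically provides for event-based connections can be sketched as follows; the send function, payload, and backoff values are placeholders, not a specific product API.

```python
import time

# Sketch of retry with exponential backoff for event-based interface
# delivery (tickets, fault reports). `send` stands in for the real API call.
def deliver_with_retry(send, payload, attempts=4, base_delay=1.0, sleep=time.sleep):
    """Retry with exponential backoff; returns True on success."""
    for attempt in range(attempts):
        try:
            send(payload)
            return True
        except ConnectionError:
            if attempt == attempts - 1:
                return False  # hand over to a dead-letter queue / alerting
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
    return False

# Simulated flaky endpoint: fails twice, then accepts the message
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("endpoint unavailable")

ok = deliver_with_retry(flaky_send, {"ticket": 4711}, sleep=lambda s: None)
print(ok, calls["n"])  # True 3
```

Building this once in a middleware layer, with monitoring and a dead-letter queue, is exactly what makes it cheaper than repeating it in every point-to-point connection.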

Practical Limitation: Many BIM models do not contain operational IDs. Do not expect a clean mapping between CAD/BIM and CAFM without prior conventions (e.g., assetTag or legacyId) in the model.

CAD/BIM: Geometry versus operational measurement data

Important distinction: Separate geometry (plans, rooms) from semantic asset data (manufacturer, maintenance interval). IFC is suitable for geometry and structure, COBie for asset-related master data. In practice, you often need a simplified geometry in CAFM and the detailed BIM geometry archived.

Concrete example: An industrial plant coupled the building management system via OPC UA to a local gateway that transmitted fault messages to the CAFM. In parallel, IFC exports from ArchiCAD were transformed into COBie tables using Safe Software FME, and only the minimally necessary asset properties (manufacturer, type, assetTag) were transferred. Result: significantly fewer incorrect assignments and lower maintenance effort.

Security and Operational Decision: If sensitive control data is involved, place control paths locally and only mirror metadata to the cloud. Cloud aggregation is sensible for reporting and BI; for control commands to systems, on-premise or hybrid is the safer choice.

Practical Process Proposal: Agree early on interface owners, SLAs for latency/availability, and error/retry behavior. Also define a minimum set of BIM properties that must be present in every model before acceptance.
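
The minimum BIM property set can be enforced with a simple acceptance check before model handover. The property names below follow the conventions used in this guide (assetTag, legacyId), plus an assumed manufacturer field:

```python
# Sketch of an acceptance gate for the agreed minimum BIM property set.
REQUIRED_PROPS = {"assetTag", "legacyId", "manufacturer"}

def missing_properties(model_elements):
    """Return elements that fail the acceptance gate, with missing props."""
    failures = {}
    for element_id, props in model_elements.items():
        missing = REQUIRED_PROPS - props.keys()
        if missing:
            failures[element_id] = sorted(missing)
    return failures

# Illustrative element data, as it might be extracted from an IFC export
elements = {
    "door-101": {"assetTag": "D-101", "legacyId": "T-1", "manufacturer": "Acme"},
    "ahu-07":   {"assetTag": "A-07"},  # incomplete: fails acceptance
}
print(missing_properties(elements))  # {'ahu-07': ['legacyId', 'manufacturer']}
```

Run a check like this on every model delivery, and reject models that fail it; retrofitting IDs after import is far more expensive.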

In short: if more than three systems are connected, plan for middleware, define assetTag/legacyId in the BIM model, and enforce simple QoS SLAs for each interface. Otherwise, you will later invest in expensive ad-hoc fixes.

Read more: for technical implementation and protocol checklists, consult the buildingSMART specifications for IFC/COBie.

Security, Data Protection, and Architecture Decision: Cloud or On-Premise

Key takeaway: Security and data protection requirements must guide architectural decisions; technically elegant cloud functions are worthless if operational security or compliance are not guaranteed.

Start with a Data Flow Map: which data is controlled (access control, control commands), which is read-only (metering, logs), and which is personal data. Make architectural decisions along these flows: control paths remain local, telemetry can be mirrored to the cloud. This separation reduces attack surfaces without limiting reporting capabilities.

Specific security requirements that must be implemented

Focus on three verifiable requirements: identity and access management (central authentication, roles, service accounts), encryption (TLS in transit, AES-256 or equivalent at rest), and auditability (complete, immutable logs for migrations and interface access). Supplement this with regular penetration tests and a SIEM concept, and consult the BSI specifications.
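
Auditability in particular benefits from tamper-evident logs. A minimal sketch, assuming plain SHA-256 hash chaining; a production setup would add signing and write-once storage:

```python
import hashlib
import json

# Sketch of a tamper-evident audit trail for migration actions: each entry
# carries the hash of its predecessor, so later modification breaks the chain.
def append_entry(log, action):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"action": action, "prev": prev_hash}, sort_keys=True)
    log.append({"action": action, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def chain_is_intact(log):
    for i, entry in enumerate(log):
        prev_hash = log[i - 1]["hash"] if i else "0" * 64
        body = json.dumps({"action": entry["action"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
    return True

log = []
append_entry(log, "load rooms batch 1")
append_entry(log, "load assets batch 1")
print(chain_is_intact(log))   # True
log[0]["action"] = "tampered"
print(chain_is_intact(log))   # False: chain detects the modification
```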

Tradeoff: Cloud offers fast scaling for BI and ML analyses, but every cloud migration increases organizational requirements: contracts, proof of compliance, data flow documentation, and exit scenarios. On-premise reduces dependencies but costs more personnel in the long run and delays innovation.

  • Decision Questions: Who needs control rights for systems? (local operators only?)
  • Compliance Check: Which data is GDPR-sensitive and how long must it be retained?
  • Integration requirements: Do interfaces require low latency or local VLAN segregation?
  • Operational readiness: Does your IT have capacity for patching, backups, and 24/7 monitoring?

Practical judgment: A hybrid architecture is the most realistic compromise in most CAFM replacement projects. Local gateways for building automation and access control, cloud for BI, archiving, and DevOps-supported reporting pipelines. Accept the resulting additional complexity in integration for the sake of better security and scalability.

Concrete example: A municipal hospital kept BMS control paths and access control fully on-premise behind separate VLANs and an OPC UA gateway. Telemetry and KPI data were mirrored via an encrypted stream to an Azure instance where Power BI dashboards run. This allowed for quick evaluations without putting operationally critical control commands into the cloud.

Important: Define security acceptance criteria before tendering and include them in SLAs and test plans.

Immediate action: Create a short checklist for the RFP: data classification, encryption requirements, PenTest frequency, retention periods (GDPR), for cloud: data location and exit plan. Without these specifications, comparing offers is meaningless.

Next step: Translate the security requirements into testable acceptance criteria for the pilot (PenTest result, latency SLA, SIEM events) and test them in a short, real-world PoC. Then, make a final decision on cloud, on-premise, or hybrid based on reliable measurements. Not based on airy (or funny?) promises.

Reports, Analyses, and Connection to Other Systems

Clear Statement: Reporting and analytics are not nice extras; they are the operational interface of the new CAFM. If master data migration, interfaces, security, process analyses, user feedback, and integration with other systems are not treated as an "internal contract" within the project from the outset, the whole thing will end up producing pretty screens with incorrect numbers. Unfortunately, I've seen that before...

Data contract first: For each key figure, define a small data contract document: source, aggregation rule, timeliness (SLA), owner, and checksum for validation. These contracts are not bureaucratic overhead but the interface between CAFM, ERP, access systems, BMS, and the BI layer, and they prevent later finger-pointing and disputes between stakeholders.
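
A data contract of this kind can be kept machine-readable so the pipeline validates it automatically. All concrete values in this sketch (KPI name, source path, owner address) are illustrative:

```python
# Sketch of a machine-readable data contract for one KPI, with the fields
# named above: source, aggregation rule, timeliness SLA, owner, checksum rule.
mttr_contract = {
    "kpi": "MTTR_hours",
    "source": "CAFM.tickets",  # single source of truth for this KPI
    "aggregation": "mean(resolved_at - created_at) per month",
    "timeliness_sla": "nightly batch, complete by 06:00",
    "owner": "fm-reporting@example.org",
    "checksum_rule": "ticket count in BI == ticket count in CAFM export",
}

def validate_contract(contract):
    """Reject contracts that are missing any mandatory field."""
    required = {"kpi", "source", "aggregation", "timeliness_sla",
                "owner", "checksum_rule"}
    missing = required - contract.keys()
    if missing:
        raise ValueError(f"incomplete data contract: {sorted(missing)}")
    return True

print(validate_contract(mttr_contract))  # True
```

Stored next to the ETL code and checked in CI, an incomplete contract then fails the build instead of surfacing as a dashboard dispute months later.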

Specific testing and implementation tasks

  • KPIs as code: Define KPIs in a reproducible manner (SQL, DAX, or Python notebook) and create test coverage with sample datasets.
  • Source per KPI: Determine a single source of truth; if the CAFM does not provide all attributes, use ERP or BIM as the authoritative source and document the reconciliation steps.
  • Clearly define data latency: Define which dashboards require real-time data and which can be updated overnight; real-time data incurs integration effort and security checks.
  • Archiving strategy: Export and archive old transactions outside the production CAFM, but provide aggregated historical metrics for analysis.
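
"KPIs as code" can look as simple as this: the KPI is a reproducible function, and the sample dataset doubles as test coverage. Field names and timestamps are assumptions:

```python
from datetime import datetime

# Sketch of a KPI defined as code: MTTR over resolved tickets.
def mttr_hours(tickets):
    """Mean time to repair in hours; ignores tickets that are still open."""
    durations = [(t["resolved"] - t["created"]).total_seconds() / 3600
                 for t in tickets if t.get("resolved")]
    return round(sum(durations) / len(durations), 2) if durations else None

# Sample dataset that also serves as the KPI's test fixture
sample = [
    {"created": datetime(2024, 5, 1, 8), "resolved": datetime(2024, 5, 1, 12)},
    {"created": datetime(2024, 5, 2, 9), "resolved": datetime(2024, 5, 2, 17)},
    {"created": datetime(2024, 5, 3, 7), "resolved": None},  # still open
]
print(mttr_hours(sample))  # 6.0 -> (4h + 8h) / 2
```

The same definition can then be ported to SQL or DAX; the fixture keeps all implementations honest.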

Tradeoff and limitation: Real-time provides responsiveness but scales poorly across heterogeneous interfaces (BMS, access control, IoT). In practice, a hybrid approach makes sense: event-triggered updates for tickets and disruptions, batch-based ETL for billing, and a data lake/BI layer for historical analyses.

Concrete example: An office real estate manager used Talend for ETL, Safe Software FME to simplify CAD/IFC geometries and filled an Azure Data Lake. The operational dashboards in Power BI are updated nightly; ticket events, on the other hand, arrive via REST in real-time. The result: reliable MTTR key figures and less manual rework in invoice checks.

Practical verdict: Many organizations demand native, ready-made reports from CAFM. This is a misguided wish (and I know what I'm talking about...). Modern BI tools are more flexible, allow for auditability and versioning, and prevent workarounds. Use CAFM as the authoritative source for master data and a separate BI layer for ad-hoc analyses and management dashboards. Or, if you must, an internal report generator that is truly flexible and easy to use. Plus, it can tap into other data sources and is well-documented. But please, only as a cheap workaround.

Deliverables for the tender and the pilot: Data contracts for top 10 KPIs, minimal ETL flow with test data, report owner matrix, retention/archive plan, and a small PoC dashboard with live/batch data.

Next step: Create two quickly verifiable PoCs – one for live events (tickets) and one for nightly aggregates (you know, what awaits you every morning...) – and evaluate effort, security, and testability before defining your reporting comprehensively.

Project Organization, Consultant Role, Scheduling, Go-live, and User Acceptance

A clear control model is decisive: Definitely establish a small steering committee (decision-makers from FM, IT, and controlling) and a single project manager with budget authority. Without clear decision-making authority, everything slows down, from master data takeover to the prioritization of interfaces and security.

Project organization and consultant role

Define roles pragmatically: Separate strategic control, functional streams (operations, processes, users), IT streams (interfaces, system integration), and supplier management. Appoint an interface owner with testing responsibility for each interface.

  • Steering committee: decides on scope changes and budget approvals
  • Project Management: provides schedule, escalation path, and status reports
  • Business Owner: responsible for process mapping and acceptance criteria
  • Interface Owner: owns interface operation, SLAs, and monitoring

Use consultants — but correctly: Commission consultants based on results, not hourly rates (yes, a pious wish, I know). Check references in CAFM replacement projects, including CAD/BIM, and demand a transfer plan for knowledge and artifacts (scripts, mappings, test cases). Avoid consultants who only sell interfaces as an add-on; good consulting delivers a repeatable migration runbook.

Scheduling, go-live, and practical priorities

Milestones instead of fixed durations: Plan scoping, process analysis, pilot, phased rollout, and hypercare as clearly defined milestones with unambiguous acceptance criteria. Each release needs a test package: data checks, interface smoke tests, authentication checks, and a functioning escalation scheme.

Go-live pragmatism: Avoid big bang if critical interfaces to ERP, access control, or BMS are affected. Decide on the minimum scope of functionality, rollback triggers, and a 48-72 hour hypercare shift with dedicated IT and FM resources before cutover.

Tradeoff: The more you consider user wishes in the final months, the greater the risk of scope creep. Decide consciously: either a stable go-live with a post-maintenance process or an extended go-live with higher operational risk. Interestingly, most projects tend towards the latter...

Concrete example: A medium-sized property operator carried out the replacement building by building. For Building A, all tickets, SAP interfaces, and access data were synchronized in a pilot wave; Building B remained in parallel operation for six weeks. A two-week hypercare support ran in parallel, during which interface owners proactively checked logs and data reconciliations were automated. This phasing prevented an extensive rollback and ensured controlled user acceptance.

Make user acceptance measurable: Train key users practically in real workflows, not in feature demonstrations. Measure success through usage metrics (login rate, ticket creation without manual workaround, first-time fix rate) and not just through satisfaction surveys.

Important ruling: External consultants should not be the sole source for migration scripts and interface documentation. Insist on repositories, executable test cases, and handover protocols. This ensures your team remains capable when the project transitions to normal operations.

In short, for the tender: require a project plan template with milestones, a migration runbook, an interface owner list, a hypercare plan, and proof of knowledge transfer. Without these deliverables, the contract remains a blind purchase. And I've seen that movie too many times ;-)

Next step: Define the five essential acceptance criteria for your pilot now — for example, data integrity, interface stability, security checks, user workflows, and hypercare availability — and include them in the contract.

Frequently Asked Questions

Straight to the point: Most replacement projects stall on six issues: unclear data identities, untested interfaces, missing security rules, unfulfilled user requirements, overly long project durations, and incorrect consultant selection. The following FAQs provide concise, actionable answers and a direct recommendation for action for each.

Short answers with immediately actionable measures

  • How do I assess if the data is transferable: Perform a quick profiling (key distributions, missing references, frequency of duplicate IDs) and flag data records with critical errors. Action: Block problematic data records for cutover and create a prioritized cleanup backlog.
  • Which interfaces first: Prioritize by operational impact, not technical elegance. Action: Create a Top 3 interface list by failure consequences and provide Interface Owners.
  • Cloud or On-Premise – how do I decide practically: Make the architectural decision along the data paths: control commands locally, telemetry and reports in the cloud. Action: For each interface, define whether control data must remain local and document the reasons.
  • How do I prevent users from keeping workarounds: Don't just inform key users, have them test with clear acceptance criteria. Action: Conduct a short live test with real ticket cycles and document workarounds as migration tasks. You don't want users as enemies. Please take this seriously!
  • When do I need a consultant: If you don't have internal technical migration scripts or middleware experience. Action: Commission consultants based on results (yes, that will be difficult...), demand code handover into a repo (okay, that will also be difficult...), and insist on executable test cases.
  • Do I have to migrate historical transactions completely: Not necessarily; historical detailed data increases effort and error rates. Action: Export raw data to an auditable archive and only migrate the aggregations necessary for operations and reporting. Please: don't move data graveyards!
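
The quick profiling from the first FAQ item can be sketched in a few lines; the record layout, the reference field, and the sample data are illustrative:

```python
from collections import Counter

# Sketch of the 48-hour quick profiling: duplicate-ID frequency and
# missing references across a set of exported records.
def profile(records, ref_field, valid_refs):
    ids = Counter(r["id"] for r in records)
    return {
        "duplicate_ids": sorted(i for i, n in ids.items() if n > 1),
        "missing_refs": sorted({r["id"] for r in records
                                if r.get(ref_field) not in valid_refs}),
    }

records = [
    {"id": "A1", "room": "R1"},
    {"id": "A1", "room": "R2"},  # duplicate id
    {"id": "A2", "room": None},  # missing reference
]
report = profile(records, "room", {"R1", "R2"})
print(report)  # {'duplicate_ids': ['A1'], 'missing_refs': ['A2']}
```

Everything this report flags becomes either a cutover blocker or an entry in the prioritized cleanup backlog, as recommended above.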

Trade-off that is often underestimated: APIs initially cost more implementation time than CSV exports, but are significantly more robust in operation. If you want to gain time in the short term, consciously accept the technical debt and plan a refactoring of the interfaces in the second project phase. But that costs more :-(

Case study: A municipal campus opted for a two-stage model for its replacement. First, access data was synchronized via a nightly batch, and at the same time, a REST API for ticket events was developed as the long-term solution. During the pilot, a mobile labeling campaign was carried out in which technicians attached QR codes to critical assets to link them cleanly to the legacyId. Result: faster operational start-up and a predictable transition to real-time integration.

Frequent mistake: Making decisions about architecture or consultants only when the tender is finalized. Make these decisions beforehand, otherwise you'll be comparing apples and oranges.

Checklist:
1) Define 3 critical interfaces and appoint owners;
2) Define legacyId as an immutable key;
3) Archive detailed history outside the production system;
4) Request code handover and test cases from the consultant.

Consistent measure: Translate each FAQ answer into a task with a clear deadline and a responsible person. Without this assignment, good answers remain mere declarations of intent. And that's bad, sorry.

You can implement the following steps immediately:

1) Start a 48-hour profiling run for your top 3 (or, from my own experience, top 2) data objects,

2) Name the three interfaces with the highest operational risk and their owners,

3) Create a repository for migration scripts and request initial test runs.

 

I wish you success!
