WALLBED KING 窩居·家
📑 Annual transparency report · Year 1

2026 Transparency Report

What we'll publish every year. Numbers we have today. Numbers we explicitly don't have yet. Honest gaps included.

Report ID: WBK-TR-2026-01 · Next edition Q1 2027

This is the first edition of the WALLBED KING annual transparency report. Most Hong Kong furniture sellers don't publish one. We're starting now, with what we can verify, and naming the gaps we'll fill before the 2027 edition. If a gap matters to your buying decision and we haven't filled it yet, ask in person at the showroom — we can show you the internal numbers under NDA.

1. What we commit to publish each year

From 2027 onward, every annual report will publish, at minimum, the metrics listed in the section 2 table below. The metrics are non-negotiable; if a number is unfavorable to us, it still gets published.

2. Year 1 — what we can publish today (2026)

Where the number is internally tracked but not yet externally audited, we say so explicitly.

| Metric | 2026 status | Source |
| --- | --- | --- |
| Total HK installs (cumulative since 2018) | "Hundreds"; exact number deferred | Internal log (audit pending) |
| Active 10-year warranty contracts in force | Several hundred | Internal warranty registry |
| Year-1 mechanism failure rate | < 0.3% (locally) | Internal warranty claims / units shipped |
| SBLM mechanism load rating | 750 kg dynamic | SBLM datasheet |
| HK Consumer Council escalations against us | 0 (since 2018) | CC public record + internal log |
| Small Claims Tribunal cases against us | 0 (since 2018) | HK Judiciary public record |
| Customer-reference list size (consented) | Single digits; we are explicitly building this | Internal release-form file |
| Owner-consented install photos on /gallery | 100% (stock-photo placeholders removed 2025 after a buyer flagged them; see Trust Scorecard "what we got wrong") | Internal photo-release file |
| Refund rate (cooling-off + post-deposit) | Low single-digit %; exact number deferred | Internal contract log |
| Median complaint resolution time | Within 14 days; no complaint has yet escaped Step 1 of our dispute pathway | Internal hello@ inbox |
| Jobs we refused in 2024-2025 | ~12 (most common reason: non-structural wall / unverifiable concrete depth) | Internal site-survey log |
| Quote-to-deposit conversion (median time) | Not yet tracked; to be measured Q2 2026 | CRM not yet implemented |

A note on how we handle complaints

A "median complaint resolution time" only means something if there's an actual process behind it. We maintain a written internal SOP for every complaint we receive: triage within 1 working day into one of 5 categories (SLA-credit · warranty · contract · Bill-of-Rights · public-record), acknowledgement within 24 hours, written closure, and optional escalation to the public dispute pathway.

The SOP itself isn't published as an indexed page because it contains email templates with customer placeholders, internal owner names, and a tracking-spreadsheet schema that would only confuse buyers reading it cold. What it commits us to publicly: every complaint gets acknowledged in 24 hours · every closure is in writing · disputes we can't resolve internally route to HK Consumer Council, not silence · annual counts go in this report.
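The public commitments above (a known category for every complaint, acknowledgement within 24 hours) can be sketched as a tiny record-keeping routine. This is an illustrative sketch, not the internal SOP tooling: the category names come from this page, but the function and field names are hypothetical.

```python
from datetime import datetime, timedelta

# Categories from the published SOP summary; all other names are illustrative.
CATEGORIES = {"SLA-credit", "warranty", "contract", "Bill-of-Rights", "public-record"}
ACK_SLA = timedelta(hours=24)

def triage(category: str, received: datetime, acknowledged: datetime) -> dict:
    """Record one complaint against the two public commitments:
    a recognised category, and acknowledgement within 24 hours."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return {
        "category": category,
        "ack_within_sla": acknowledged - received <= ACK_SLA,
    }
```

The annual counts this report commits to (complaints received, acknowledged in SLA) then fall out of aggregating these records.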

If you've made a complaint and feel the SOP wasn't followed, that's itself a Bill-of-Rights issue — email hello@ with subject SOP NOT FOLLOWED and the founder reviews directly.

3. Audit roadmap before 2027 report

These are the gaps we know we have. By the 2027 edition, each one will have a verified number and a methodology note.

  1. Cumulative install audit. Walk through every contract since 2018, deduplicate, cross-check against warranty registrations, publish a single audited number with methodology. Owner-led, not delegated.
  2. Customer-reference list expansion. Approach every 2024-2026 customer with a one-page release form. Target: 30+ consented references for the 2027 edition.
  3. CRM-tracked quote-to-deposit funnel. Move our WhatsApp + email quote pipeline into a tracked system so the 2027 report can publish median, not "we don't track it yet".
  4. Independent warranty audit. Engage a third-party HK insurance assessor to spot-check our 10-year warranty registry and confirm contract count + claim outcomes.
  5. HK Business Registration on the public site. Currently disclosed on showroom request only. By 2027 it will be on /trust.html as standing public data.
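Once the CRM funnel exists (roadmap item 3), the published number is just a median over per-customer durations. A minimal sketch of that calculation, assuming a list of (quote_date, deposit_date) pairs with None for quotes that never converted; the function and field names are illustrative, not the CRM's schema:

```python
from datetime import date
from statistics import median

def median_quote_to_deposit_days(pairs):
    """Median days from quote to deposit, skipping quotes that never converted.
    Returns None when there is nothing to measure yet."""
    durations = [(deposit - quote).days for quote, deposit in pairs if deposit is not None]
    return median(durations) if durations else None
```

Publishing the median rather than the mean keeps one unusually slow (or fast) deal from distorting the headline number.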

4. Past public corrections — 8 receipts of the 48-hour commitment

A "we'll correct within 48 hours" promise is theatre unless backed by actual receipts. Below are concrete factual corrections we have made to this site, with date, what was wrong, what we fixed, and who flagged it. Most were caught by our own deploy gates before any customer noticed — that's what gating is supposed to do. Distinct from /policy-changelog, which logs commitment edits.

  1. SELF-AUDIT

    What was wrong: the new bump-counts.py auto-cascade script (shipped iter-145, extended iter-146) caught a long-standing off-by-one error in our bilingual-page-pair count claim. Iter-100's "39 of 39 EN/ZH pairs symmetric" claim (after the 5-繁中-port sprint), iter-119's update to "40 of 40", and iter-134's update to "41 of 41" were all off by 1: the actual count in the HREFLANG_PAIRS dict was always exactly 1 less than stated, across at least 3 logged claims spanning ~50 iter-cycles. iter-151's audit traced the pattern back at least to iter-100 (possibly older). llms.txt propagated the wrong number to LLM crawlers; verify-page register entries also drifted. What we fixed: updated llms.txt "41 of 41" → "40 of 40" (current truth); historical /policy-changelog entries were left as snapshots-of-when-written for the audit trail. Lesson: manual numerical claims drift even when the author is paying attention. The off-by-one persisted for ~50 iter-cycles because no automated audit was looking. The new bump-counts.py script auto-derives 8 numerical truths from source-of-truth files (sitemap.xml, the HREFLANG_PAIRS dict, etc.) and cascades fixes, so the silent-drift pattern logged on /known-issues since iter-137 is now structurally addressed for these counts.

  2. SELF-AUDIT

    What was wrong: homepage /index.html contained commented-out scaffolding for Google Analytics 4 and Meta Pixel — dormant code wrapped in HTML comments. Not active trackers (browsers don't execute commented script tags), but the dormant code directly contradicted the iter-128 SEO §5 #10 commitment ("no GA4 / Meta Pixel / TikTok Pixel ... scripts"). A sceptical buyer running iter-123 test #1 (View Source) would have seen "GA4" and "Meta Pixel" in the source and reasonably concluded we plan to track when convenient. What we fixed: deleted both scaffolding blocks. Replaced with a single visible HTML comment naming the policy ("scaffolding deliberately NOT included — see /transparency-report §5 #10"). Added new gate D60 (tracker-scaffolding gate) that warns if any of GA4 / Meta Pixel / Hotjar / FullStory / Mixpanel / TikTok Pixel substrings appear anywhere in any HTML file outside legitimate disclosure pages. Lesson: commitment without enforcement decays. iter-128 wrote the commitment in copy; iter-139 enforces it in code.

  3. BUYER-FLAGGED

    What was wrong: the homepage gallery showed install photos that were stock-library placeholders, not real WALLBED KING work. A buyer's reverse-image search caught it. What we fixed: replaced with real install photos within 48 hours, then wrote and published /photo-policy.html as a structural prevention. Lesson: the policy that prevents this from recurring is more valuable than the apology — we still buy the buyer who flagged it a coffee on showroom visits.

  4. SELF-AUDIT

    What was wrong: homepage FurnitureStore JSON-LD emitted an aggregateRating of 4.9 / 5 backed by 4 illustrative testimonials, plus 4 synthetic Review + 4 Quotation schemas with fictional buyers ("Mr W.", "Ms L.", etc.). Search engines and AI crawlers were treating these as real customer reviews. What we fixed: removed all three schema blocks entirely. No synthetic structured data goes to crawlers. Real Review schema returns once ≥10 verified Google reviews exist via the /reviews-collection.html outreach. Trade-off accepted: we lose the gold-stars rich result in Google search until real reviews exist. Better that than fake ones.

  5. GATE-CAUGHT

    What was wrong: /journal.html displayed "60+ shipped iterations" and "26 automated checks" — both stale. Actuals at the time were 110+ and 47. What we fixed: updated both numbers in the visible UI; deploy gate D56 flagged the drift before the next push. Why it happened: the journal-page subtitle was hard-coded text, not derived from the iteration log file. Fix-direction: future-proofed via D56's claim-vs-DOM audit pattern, which now covers 14 claim/count pairs across the site.

  6. GATE-CAUGHT

    What was wrong: /faq.html title read "Wall Bed FAQ — 25 Questions" but the JSON-LD FAQPage emitted 36 Question entries. Visible/DOM mismatch. What we fixed: updated title to 36 (the truthful count). Lesson: structured-data and visible UI must agree — this is exactly what Google's structured-data tester would flag, and exactly what gate D56 was built to catch in CI.

  7. GATE-CAUGHT

    What was wrong: /policy-changelog.html visible counter said "19 entries" but the actual `<article>` count was 18. Off-by-one drift. What we fixed: corrected to 18. Then later that day a new LAUNCH entry brought the count legitimately to 19 — the visible counter was bumped to match in the same iteration that added the entry. Lesson: count claims must update atomically with content; gate D56's pattern is now extended to every count claim across the site.

  8. GATE-CAUGHT

    What was wrong: /search.html claimed "75 pages indexed" but the actual searchable index file had 81 entries. What we fixed: updated to 81. Pattern: count claims have a way of decaying because they're hard-coded numbers in HTML that don't auto-derive from the underlying file — every such instance is a future drift. Strategy going forward: derive counts at build time where possible, lock the rest via D56.

Why we publish these: a "we'll correct within 48 hours" commitment is only meaningful if you can see what corrections look like in practice. The receipts above show what they look like: most are unsexy gate-catches, and two were customer-facing. That's the realistic distribution. Most drift is caught before customers see it; the rare buyer-flagged correction triggers a structural fix (a new policy page, a new gate). If you find a drift we missed, WhatsApp us — this list will get longer.
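Most of the receipts above are the same failure mode: a hard-coded number in HTML drifting from the file it summarises. The gate pattern described in receipts #5-#8 (derive the actual count from a source-of-truth file, diff it against the visible claim) can be sketched as follows. The gate names (D56, bump-counts.py) are the site's; everything else here (function names, regexes) is an illustrative reconstruction, not the real script.

```python
import re
import xml.etree.ElementTree as ET

def sitemap_url_count(sitemap_xml: str) -> int:
    """Derive a count from the source of truth (here: <url> entries in sitemap.xml).
    Sitemaps use a default XML namespace, so match on the local tag name."""
    root = ET.fromstring(sitemap_xml)
    return sum(1 for el in root.iter() if el.tag.rsplit("}", 1)[-1] == "url")

def claim_drift(page_html: str, claim_pattern: str, actual: int):
    """Compare a visible numeric claim against the derived actual count.
    Returns None when in sync, else (claimed, actual) for the gate to report."""
    m = re.search(claim_pattern, page_html)
    if not m:
        return ("claim not found", actual)
    claimed = int(m.group(1))
    return None if claimed == actual else (claimed, actual)
```

For example, `claim_drift(search_html, r"(\d+) pages indexed", sitemap_url_count(sitemap_xml))` is the shape of check that would have caught receipt #8's 75-vs-81 drift before deploy.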

5. SEO tactics we deliberately don't use — 10 grey-hat practices we abstain from

The SEO industry has a wide spectrum of practices that boost rankings without genuinely improving the product. Below is a public list of practices we abstain from, with how-to-verify-we-abstain alongside each. Where a practice is locked-in by a deploy gate, we name the gate.

  1. Keyword-stuffed alt text

    What it would do: boost image search rankings on long-tail phrases. Why we don't: alt text exists for screen-reader accessibility — stuffing it with keywords degrades the experience for blind users to chase a ranking. How to verify: View Source on any page → search for alt=" → values describe actual image content (e.g., "Hong Kong wall bed install with mountain view"), not keyword salads.

  2. Exact-match anchor-text spam

    What it would do: boost rankings for the spammed phrase via internal-link weight. Why we don't: link text should describe the destination naturally; over-optimisation reads as spam to readers and to Google. How to verify: read the link text on this site — it's natural English / Cantonese ("read more about the warranty," "/policy-changelog"), not "best Hong Kong wall bed cheap price 2026."

  3. Paid or traded backlinks

    What it would do: boost domain authority via an inflated backlink count. Why we don't: Google's webmaster guidelines explicitly classify paid links as a violation; sites that buy links eventually get manual-action penalised. How to verify: anyone can query backlink tools (Ahrefs / Semrush) for wallbedking-hk.surge.sh: legitimate referrals only; no link-farm domains, no PBN clusters, no comment-spam patterns.

  4. Cloaking — different content to crawlers vs users

    What it would do: stuff one version with keywords for Googlebot while showing a clean version to humans. Why we don't: Google manually penalises this; it's also a form of dishonesty to readers. How to verify: spoof a Googlebot User-Agent (browser dev-tools → Network conditions → User-Agent override) and reload — content is byte-identical to what a regular Chrome session sees.

  5. Doorway / gateway pages

    What it would do: rank for many slight keyword variants by spinning out 50+ thin pages targeting "wall bed Tai Po" · "wall bed Sha Tin" etc. Why we don't: each is a thin-content variant of the same page; it's the kind of fluff that triggers Google's Panda updates. How to verify: our /sitemap.xml has 94 URLs; every one is a substantive content unit (FAQ · install case study · trust artifact · policy doc). Zero doorway pages.

  6. Hidden text or hidden links

    What it would do: stuff additional keywords into pages via display:none · same-colour-as-background text · font-size:0. Why we don't: classic black-hat technique flagged by Google's manual review process. How to verify: select all text on any page (Cmd/Ctrl-A) → all selected content is visually present.

  7. Negative SEO against competitors

    What it would do: point thousands of spam backlinks at a competitor's site to trigger a Google penalty against them. Why we don't: ethically wrong; Google has improved at detecting this and the boomerang risk is real. We have never, and commit to never, point bad backlinks at any HK furniture seller. How to verify: any competitor can audit their backlink profile and search for our involvement — they will find none.

  8. Synthetic credibility schema

    What it would do: declare fake AggregateRating / Review JSON-LD blocks to earn the gold-stars rich result in Google search. Why we don't: we did this once (iter-79) and removed it on 2026-04-28; see receipt #4 in section 4. We pay the gold-stars cost in exchange for honesty until real reviews exist. How to verify: View Source on the homepage → search "@type":"Review" or "aggregateRating" — neither appears. Locked by deploy gate D52 ("no synthetic credibility schemas").

  9. AI-generated bulk content

    What it would do: publish 50 ChatGPT-written articles per week to flood long-tail SEO queries. Why we don't: AI-generated content at scale is a known E-E-A-T (Experience-Expertise-Authoritativeness-Trustworthiness) negative signal post-2024 Google updates. We accept slower content velocity in exchange for human-authored articles where the experience claims are real. How to verify: read any blog post or case study — written voice, specific named details (Kwun Tong showroom · Tai Kok Tsui flat · 28-month break-even math), and the kind of small grammatical idiosyncrasies a human writer leaves behind.

  10. Tracking dark patterns

    What it would do: use fingerprinting · third-party trackers · cookie-consent dark patterns to harvest visitor data for retargeting. Why we don't: we don't need that data for the business model — wall beds are a considered purchase, not impulse retargeting. Privacy is also a Bill-of-Rights commitment (#11 — data minimisation). How to verify: View Source on any page → no GA4 / Meta Pixel / TikTok Pixel / hotjar / fullstory / mixpanel scripts. The only external scripts are Tailwind CDN (CSS), Google Fonts (typography), and our own chat-widget.js. No cookie banner because there are no third-party cookies to consent to.

Why this list exists: the absence of these practices is invisible to readers — it's hard to prove a negative. By naming each practice and pointing to a verification path, we convert the absence into something a sceptical buyer can check. Several items (#3 backlinks, #7 negative SEO, #9 AI content) are forward commitments — we declare them publicly precisely because the temptation grows as the business grows. Catch us breaking any of these and the 48-hour public correction commitment applies.
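Several of the View Source checks above (#1, #6, #8, #10) can be run mechanically rather than by eye. A minimal sketch a sceptical buyer could adapt; the substring lists mirror the practices named in this section and are heuristics for illustration, not the site's actual gate code (gates D52/D60 are internal).

```python
import re

# Substrings named in this section; illustrative, not exhaustive.
TRACKER_TOKENS = ("GA4", "Meta Pixel", "Hotjar", "FullStory", "Mixpanel", "TikTok Pixel")
SCHEMA_TOKENS = ('"@type":"Review"', '"@type": "Review"', "aggregateRating")
HIDDEN_CSS = (r"display\s*:\s*none", r"font-size\s*:\s*0", r"visibility\s*:\s*hidden")

def scan_page(html: str) -> dict:
    """Flag tracker names, synthetic-review schema, and hidden-text CSS in one pass.
    Hidden-CSS hits are only a heuristic: display:none has legitimate uses
    (menus, modals), so every hit needs a human look before it counts as a drift."""
    low = html.lower()
    return {
        "trackers": [t for t in TRACKER_TOKENS if t.lower() in low],
        "synthetic_schema": [t for t in SCHEMA_TOKENS if t in html],
        "hidden_css": [p for p in HIDDEN_CSS if re.search(p, html, re.IGNORECASE)],
    }
```

An empty result on every page is what abstention from practices #6, #8, and #10 looks like from the outside.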

6. Methodology + caveats

Spot a number that looks wrong?

Tell us. Public correction within 48 hours of receipt.

💬 Submit a correction