57 Commits

Author SHA1 Message Date
acidburns eb80f49046 test: align test mode with ACK/time flow and expose ack metrics 2026-02-04 20:12:02 +01:00
31a3eea5dd docs: rewrite README for current multi-meter behavior 2026-02-04 19:03:43 +01:00
c62f07bf44 Document multi-meter UART mapping and energy-only sender behavior 2026-02-04 15:22:30 +01:00
938f490a32 Add multi-meter energy sender schema with UART0/1/2 mode split 2026-02-04 15:22:24 +01:00
290ca55b8b Reset RX signal state at start of each receive window 2026-02-04 15:11:07 +01:00
f177e5562d Drain oversized LoRa packets to prevent RX FIFO corruption 2026-02-04 15:10:37 +01:00
cb6929bdc1 Add detailed sender ACK RX diagnostics with reject context 2026-02-04 14:42:44 +01:00
c3e5ba3a53 Use protocol constants for ACK airtime window sizing 2026-02-04 14:40:34 +01:00
373667ab8a Document minimal batch/ack protocol and timestamp safety rules 2026-02-04 11:57:59 +01:00
f0503af8c7 Refactor LoRa protocol to batch+ack with ACK-based time bootstrap 2026-02-04 11:57:49 +01:00
f08d9a34d3 Normalize power/energy output formatting 2026-02-04 02:33:43 +01:00
7e5e23e56c Scale ACK RX window to LoRa airtime
- Compute ACK receive window from airtime with bounds and margin
- Retry once if initial window misses
- Document ACK window sizing
2026-02-04 01:21:42 +01:00
1024aa3dd0 Add RX reject reasons to telemetry and UI
BACKWARD-INCOMPATIBLE: MeterBatch schema bumped to v2 with err_rx_reject.
- Track and log RX reject reasons (CRC/protocol/role/payload/length/id/batch)
- Include rx_reject in sender telemetry JSON and receiver web UI
- Add lora_receive reject reason logging under SERIAL_DEBUG_MODE
2026-02-04 01:01:49 +01:00
0e7214d606 Repeat batch ACKs to cover RX latency
- Add ACK_REPEAT_COUNT/ACK_REPEAT_DELAY_MS and repeat ACK sends
- Update README with repeat-ACK behavior
2026-02-04 00:53:06 +01:00
5a86d1bd30 Add LoRa TX timing diagnostics
- Log idle/begin/write/end timing for LoRa TX under SERIAL_DEBUG_MODE
- Document TX timing logs in README
2026-02-04 00:48:20 +01:00
0a99bf3268 Send batch ACKs immediately after reassembly
- Move ACK ahead of MQTT/web work to meet sender 400ms window
- Update ACK log format and document early-ACK behavior
2026-02-04 00:36:40 +01:00
4e06f7a96d Log ACK transmit and reject cases
- Add debug log for ACK TX with batch/sender/receiver ids
- Log rejected ACKs to help diagnose mismatched ids or batches
2026-02-04 00:35:01 +01:00
fde4719a50 Improve timesync acquisition and logging
- Add boot acquisition mode with wider RX windows until first TimeSync
- Log sender TimeSync RX results and receiver TX events
- Document acquisition behavior
2026-02-04 00:33:05 +01:00
e0d35d49bc Validate RTC epoch before setting time
- Reject out-of-range DS3231 epochs and log accept/reject under SERIAL_DEBUG_MODE
- Document RTC validation so LoRa TimeSync can recover
2026-02-04 00:31:10 +01:00
e8fb8680cb Gate slow timesync on LoRa reception
- Keep sender in fast TimeSync listen mode until it receives a LoRa beacon
- Reset scheduler when interval changes to avoid stuck timing
2026-02-04 00:03:38 +01:00
cbf0f7d9b9 Expose timesync error in MQTT and web UI
BACKWARD-INCOMPATIBLE: MQTT faults payload now always includes err_last/err_last_text and err_last_age (schema change).
2026-02-04 00:01:38 +01:00
f7a2503d7a Add timesync burst handling and sender-only timeout
- Add TimeSync fault code and labels in UI/SD/web docs
- Trigger receiver beacon bursts on sender drift, but keep errors sender-local
- Sender flags TimeSync only after TIME_SYNC_ERROR_TIMEOUT_MS
2026-02-03 23:40:11 +01:00
43893c24d1 Keep receiver timesync fast and extend sender fast window
- Receiver now sends time sync every 60s indefinitely (mains powered)
- Sender stays in fast timesync listen mode for first 60s even with RTC
2026-02-03 22:28:36 +01:00
cd4c99f125 Calibrate battery ADC and document LiPo curve
- Add BATTERY_CAL config and debug logging for raw ADC samples
- Use LiPo voltage curve (4.2V full, 2.9V empty) for % mapping
- Document battery calibration, curve, and debug output in README
2026-02-03 22:12:48 +01:00
b8a4c27daa Average battery ADC samples
- Read battery 5 times and average for a steadier voltage estimate
2026-02-02 23:28:54 +01:00
2199627a35 Fix OLED autosleep timing and battery sampling cadence
- Track last OLED activity to avoid double timeout; keep power gating on transitions
- Copy TZ before setenv() in timegm_fallback to avoid invalid pointer reuse
- Add BATTERY_SAMPLE_INTERVAL_MS and only refresh cache at batch start when due
- Keep battery sampling to a single ADC read (Arduino core lacks explicit ADC power gating)
2026-02-02 23:01:55 +01:00
90d830da6f Keep receiver LoRa in continuous RX
- Add lora_receive_continuous() helper and use it after init and TX (ACK/time sync)
- Ensure receiver returns to RX immediately after lora_send
- Document continuous RX behavior in README
2026-02-02 22:17:09 +01:00
237e392c02 Make IEC 62056-21 meter input non-blocking
- Add RX state machine with frame buffer, timeouts, and debug counters
- Expose meter_poll_frame/meter_parse_frame and reuse existing OBIS parsing
- Use cached last-valid frame at 1 Hz sampling to avoid blocking
- Document non-blocking meter handling in README
2026-02-02 22:03:58 +01:00
8e6c64a18e Reduce sender power draw (RX windows + CPU/WiFi/ADC/pins)
- Add LoRa idle/sleep/receive-window helpers and use short RX windows for ACK/time sync
- Schedule sender time-sync windows (fast/slow) and track RX vs sleep time in debug
- Lower sender power (80 MHz CPU, WiFi/BT off, reduced ADC sampling, unused pins pulldown)
- Make SERIAL_DEBUG_MODE a build flag, add prod envs with debug off, and document changes
2026-02-02 21:44:04 +01:00
a4d9be1903 Harden history device ID validation and SD download filename 2026-02-02 21:19:44 +01:00
0e12b406de Harden web UI auth, input handling, and SD path validation
- Add optional Basic Auth with NVS-backed credentials and STA/AP flags; protect status, wifi, history, and download routes
- Stop pre-filling WiFi/MQTT/Web UI password fields; keep stored secrets on blank and add clear-password checkboxes
- Add HTML escaping + URL encoding helpers and apply to user-controlled strings; add unit test
- Harden /sd/download path validation (prefix, length, dotdot, slashes) and log rejections
- Enforce protocol version in LoRa receive and release GPIO14 before SD init
- Update README security, SD, and GPIO sharing notes
2026-02-02 21:08:05 +01:00
b5477262ea Add SD history UI and pin remap
- Add SD history chart + download listing to web UI
- Use HSPI for SD and fix SD pin mapping
- Swap role/OLED control pins and update role detection
- Update README pin mapping and SD/history docs
2026-02-02 01:43:54 +01:00
d32ae30014 Move AP credentials to config and clarify STA UI access 2026-02-02 00:23:52 +01:00
f3af5b3f1c Add SD logging and update docs
- Add optional microSD CSV logging per sender/day on receiver
- Wire logger into receiver packet handling
- Document new batch header fields, build envs, and SD logging
- Make sender links open in a new tab
2026-02-02 00:22:35 +01:00
5085b9ad3d Improve receiver web UI fields and manual 2026-02-02 00:00:55 +01:00
a03c2cdb07 Include sender error counters in batch payload 2026-02-02 00:00:29 +01:00
13f2f02e42 Tidy sender page layout and use SF12 2026-02-01 23:38:43 +01:00
16c1b90b1e Add payload codec test envs and enable serial debug 2026-02-01 22:54:07 +01:00
e5c4e04ff9 Update README for binary batch payload and SF11 2026-02-01 22:42:26 +01:00
e24798eb55 Use compact binary payload for LoRa batches 2026-02-01 22:37:21 +01:00
d27b68c1cc adjust batch ack timing and rename e_wh field 2026-02-01 21:53:18 +01:00
01f4494f00 expand web ui with batch table and manual 2026-02-01 21:04:34 +01:00
50436cd0bb document batching updates and restore bat_v in batches 2026-02-01 20:59:45 +01:00
a0080b249d increase lora throughput and improve receiver display 2026-02-01 20:09:44 +01:00
876c572bb3 force watchdog reinit for custom timeout 2026-02-01 19:37:47 +01:00
13b4025443 add lora send bypass for debugging 2026-02-01 19:34:28 +01:00
7f31b9dd95 instrument tx timings for watchdog analysis 2026-02-01 19:21:59 +01:00
660d1cde94 prevent watchdog from killing while printing json 2026-02-01 18:59:12 +01:00
f9bcfbd5f2 serial debugging console implemented, enable via config.h 2026-02-01 18:43:06 +01:00
fbd18b2e78 no sleep while ack pending 2026-02-01 18:27:58 +01:00
b4344db828 attempted lora fix: timeout increase 2026-02-01 17:53:01 +01:00
22ed41b55c Add sender queue display and batch timing 2026-02-01 17:46:26 +01:00
430b0d7054 Update ESP32 platform and LoRa batching 2026-02-01 17:03:08 +01:00
16c65744e3 Keep in-flight batch until ACK 2026-01-31 02:09:34 +01:00
8fba67fcf3 Update batch schema and add ACK handling 2026-01-31 01:53:02 +01:00
8ba7675a1c Add LoRa telemetry, fault counters, and time sync status 2026-01-30 13:00:16 +01:00
7e3b537e49 smaller rtc fixes 2026-01-29 22:59:17 +01:00
65 changed files with 1797 additions and 9039 deletions

.gitignore (vendored) · 2 lines changed

```diff
@@ -3,5 +3,3 @@
 .vscode/c_cpp_properties.json
 .vscode/launch.json
 .vscode/ipch
-__pycache__/
```


@@ -1,405 +0,0 @@
# Code Review: DD3 LoRa Bridge MultiSender
**Date:** March 11, 2026
**Reviewer:** Security Analysis
**Focus:** Buffer overflows, memory issues, security risks, and bugs
---
## Executive Summary
The codebase is generally well-written with good defensive programming practices. Most critical vulnerabilities are mitigated through bounds checking and safe API usage. However, there are several issues ranging from minor to moderate severity that should be addressed.
---
## Critical Issues
### 1. ⚠️ **No HTTPS/TLS - Credentials transmitted in plaintext**
**Severity:** CRITICAL
**File:** [web_server.cpp](src/web_server.cpp)
**Issue:** The web server runs on plain HTTP (port 80) without any encryption.
- WiFi credentials, MQTT credentials, and API authentication are sent in plaintext
- All data exchanges (history, configuration, status) are unencrypted
- An attacker on the network can easily capture credentials and impersonate users
- User login credentials transmitted via HTTP Basic Auth are also vulnerable
**Impact:** Complete loss of confidentiality for all sensitive data
**Recommendation:**
- Implement HTTPS/TLS support on the ESP32 web server
- Consider at minimum disabling HTTP when HTTPS is available
- Alternatively, restrict web access to local network only with firewall rules
- Document this limitation prominently
**Code:** [web_server.cpp L580-620](src/web_server.cpp#L576) - All `server.send()` calls use HTTP
---
### 2. ⚠️ **Default weak credentials - "admin/admin"**
**Severity:** HIGH
**File:** [config.h](include/config.h#L83)
**Issue:**
```cpp
constexpr const char *WEB_AUTH_DEFAULT_USER = "admin";
constexpr const char *WEB_AUTH_DEFAULT_PASS = "admin";
```
**Impact:** Default accounts are easily guessable; most users won't change them, especially in AP mode where `WEB_AUTH_REQUIRE_AP = false` (no auth required)
**Recommendation:**
- Force users to create strong credentials during initial setup
- Generate random default credentials (or use MAC address-based credentials)
- Never store credentials in plain-text constants
- In AP mode, either enable auth or display a security warning
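As a sketch of the MAC-based option: derive a per-device default password at first boot from the factory MAC. The FNV-1a hash, the `dd3-` prefix, and the function name here are illustrative assumptions, not project code:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical per-device default password derived from the 6-byte MAC.
// Hash and prefix are illustrative choices; any stable per-device value
// that is not guessable from the SSID would do.
std::string default_password_from_mac(const uint8_t mac[6]) {
  uint32_t h = 2166136261u;  // FNV-1a 32-bit offset basis
  for (int i = 0; i < 6; ++i) {
    h ^= mac[i];
    h *= 16777619u;  // FNV-1a prime
  }
  char buf[16];
  std::snprintf(buf, sizeof(buf), "dd3-%08x", static_cast<unsigned>(h));
  return std::string(buf);
}
```

On ESP32 the MAC could come from `esp_read_mac()`; the derived string would still need to be shown to the user once (e.g. on the OLED), since the point is that it is not guessable.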
---
## High Priority Issues
### 3. ⚠️ **AP mode has no authentication**
**Severity:** HIGH
**File:** [config.h](include/config.h#L82), [web_server.cpp](src/web_server.cpp#L115)
**Issue:**
```cpp
constexpr bool WEB_AUTH_REQUIRE_AP = false; // AP mode has NO authentication!
```
When device acts as an access point, all endpoints can be accessed without any authentication.
**Impact:** Any device that connects to the AP can access all functionality:
- Download meter data and history
- Change WiFi/MQTT configuration
- Change web UI credentials
- Affect system behavior
**Recommendation:**
- Require authentication even in AP mode
- Or implement a time-limited "setup mode" that requires initial password setup
- Display a prominent warning on AP mode UI
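The time-limited setup-mode idea can be sketched as a small gate; the function name, flag, and 10-minute window are assumptions for illustration, not project code:

```cpp
#include <cstdint>

// Hypothetical gate for a time-limited AP "setup mode": auth may be
// skipped only during the first window after boot, and only while the
// credentials are still the defaults.
constexpr uint32_t SETUP_WINDOW_MS = 10UL * 60UL * 1000UL;  // 10 minutes

bool auth_may_be_skipped(uint32_t now_ms, bool creds_still_default) {
  // Once the user sets real credentials, always require auth.
  if (!creds_still_default) return false;
  // Default credentials are tolerated only inside the boot setup window.
  return now_ms < SETUP_WINDOW_MS;
}
```

A route handler would consult this before serving unauthenticated requests in AP mode, instead of the unconditional `WEB_AUTH_REQUIRE_AP = false`.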
---
### 4. ⚠️ **Integer overflow potential in history bin allocation**
**Severity:** MEDIUM
**File:** [web_server.cpp](src/web_server.cpp#L767)
**Code:**
```cpp
uint32_t bins = (static_cast<uint32_t>(days) * 24UL * 60UL) / res_min;
if (bins == 0 || bins > SD_HISTORY_MAX_BINS) {
// error handling...
return;
}
```
**Issue:** The multiplication `days * 24 * 60` is performed in 32-bit math after the cast. It cannot overflow with the current constants (`SD_HISTORY_MAX_DAYS = 30`, `SD_HISTORY_MIN_RES_MIN = 1`), but the expression would wrap silently if those limits were ever raised.
**Current Safety:** The bounds check at [L776](src/web_server.cpp#L776) rejects any request that would need more than `SD_HISTORY_MAX_BINS` (4000) bins. Max days (30) × 24 × 60 = 43,200 minutes, so with `res_min = 1` such a request is rejected outright rather than capped.
**Recommendation:**
- Check for overflow explicitly before multiplying: `if (days > UINT32_MAX / (24UL * 60UL)) { /* reject */ }`
- Or validate `days` and `res_min` against their configured limits before computing `bins`, so the arithmetic is provably in range
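The explicit guard could look like the following sketch; `kMaxBins` stands in for `SD_HISTORY_MAX_BINS` and the function name is illustrative:

```cpp
#include <cstdint>

constexpr uint32_t kMaxBins = 4000;  // mirrors SD_HISTORY_MAX_BINS

// Computes the bin count with an explicit overflow guard instead of
// relying on the constants staying small. Returns false on any invalid
// input rather than wrapping silently.
bool compute_history_bins(uint32_t days, uint32_t res_min, uint32_t *out_bins) {
  if (res_min == 0) return false;                     // avoid division by zero
  if (days > UINT32_MAX / (24u * 60u)) return false;  // would overflow 32-bit math
  const uint32_t bins = (days * 24u * 60u) / res_min;
  if (bins == 0 || bins > kMaxBins) return false;     // same bounds check as the code
  *out_bins = bins;
  return true;
}
```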
---
### 5. ⚠️ **Potential memory leak in history processing on error**
**Severity:** MEDIUM
**File:** [web_server.cpp](src/web_server.cpp#L779)
**Code:**
```cpp
g_history.bins = new (std::nothrow) HistoryBin[bins];
if (!g_history.bins) {
g_history.error = true;
g_history.error_msg = "oom";
server.send(200, "application/json", "{\"ok\":false,\"error\":\"oom\"}");
return;
}
```
**Issue:** If a new history request arrives while a previous one left `g_history.bins` allocated in an error state, `history_reset()` cleans it up before reuse. A crash or power loss between allocation and cleanup discards the heap along with everything else at reboot, so the exposure is minimal even on an embedded target, but it is worth noting.
**Mitigation:** The [history_reset()](src/web_server.cpp#L268) function properly cleans up on next use.
**Recommendation:**
- Ensure `history_reset()` is always called before allocating new bins ✅ Already done at [L781](src/web_server.cpp#L781)
---
## Medium Priority Issues
### 6. ⚠️ **String buffer size assumptions in CSV line parsing**
**Severity:** MEDIUM
**File:** [web_server.cpp](src/web_server.cpp#L298)
**Code:**
```cpp
char line[160];
size_t n = g_history.file.readBytesUntil('\n', line, sizeof(line) - 1);
line[n] = '\0';
```
**Issue:** If SD card contains a line longer than 160 bytes (minus 1 for null terminator), the function will silently truncate data and re-attempt. The CSV data format is expected to be compact, but if corrupted files exist, this could cause parsing failures.
**Mitigation:** The function gracefully handles parse failures with `if (!history_parse_line(line, ts, p)) { continue; }` and returns false on oversized fields at [L323](src/web_server.cpp#L323).
**Recommendation:**
- This is acceptable for the use case. Consider logging truncation warnings if SERIAL_DEBUG_MODE is enabled.
---
### 7. ⚠️ **CSV injection vulnerability in meter data logging**
**Severity:** MEDIUM (Low practical risk)
**File:** [sd_logger.cpp](src/sd_logger.cpp#L107)
**Code:**
```cpp
f.print(data.total_power_w, 1); // Directly prints floating point
f.print(data.energy_total_kwh, 3);
```
**Issue:** If the floating-point values could be attacker-controlled, a cell beginning with a formula trigger (e.g. `=1+1`, which Excel evaluates as a formula) would enable CSV/formula injection. Since the power values are calculated from meter readings, the practical risk is LOW.
**Impact:** Low - values come from trusted LoRa devices, not user input
**Recommendation:**
- If you want to be extra safe, sanitize by checking first character: if value starts with `=`, `+`, `@`, or `-`, prefix with single quote or space
- For now, this is acceptable given the trusted data source
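A minimal version of the suggested sanitizer, for illustration only (note the trade-off: a leading `-` is also a legitimate sign on negative power readings, so a real implementation might apply this only to free-text columns):

```cpp
#include <string>

// Neutralizes spreadsheet formula triggers by prefixing a single quote
// when the cell starts with '=', '+', '@', or '-'.
std::string csv_sanitize(const std::string &cell) {
  if (!cell.empty()) {
    const char c = cell[0];
    if (c == '=' || c == '+' || c == '@' || c == '-') {
      return "'" + cell;  // spreadsheet treats the cell as text
    }
  }
  return cell;
}
```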
---
## Low Priority Issues / Best Practice Recommendations
### 8. **Path construction could use better validation**
**Severity:** LOW
**File:** [web_server.cpp](src/web_server.cpp#L179)
**Code:**
```cpp
static bool sanitize_sd_download_path(String &path, String &error) {
// ... checks for "..", "\", "//" ...
if (!path.startsWith("/dd3/")) {
error = "prefix";
return false;
}
}
```
**Assessment:** **Path traversal protection is GOOD**
- Checks for `..` (parent directory)
- Checks for `\` (backslash)
- Checks for `//` (double slashes)
- Requires `/dd3/` prefix
- Limits path length to 160 characters
The implementation is solid. No changes needed.
---
### 9. **HTML escaping is properly implemented**
**Severity:** N/A
**File:** [html_util.cpp](src/html_util.cpp)
**Assessment:** **XSS protection is GOOD**
```cpp
case '&': out += "&amp;"; break;
case '<': out += "&lt;"; break;
case '>': out += "&gt;"; break;
case '"': out += "&quot;"; break;
case '\'': out += "&#39;"; break;
```
All unsafe HTML characters are properly escaped. Good defensive programming.
---
### 10. **Buffer overflow checks are generally sound**
**Severity:** N/A
**Files:** [meter_driver.cpp](src/meter_driver.cpp), [lora_transport.cpp](src/lora_transport.cpp)
**Assessment:** **NO UNSAFE STRING FUNCTIONS FOUND**
- No `strcpy`, `strcat`, `sprintf`, `gets`, `scanf` used
- All buffer writes check bounds before writing
- Example from [meter_driver.cpp L50](src/meter_driver.cpp#L50):
```cpp
if (n + 1 < sizeof(num_buf)) { // Bounds check BEFORE write
num_buf[n++] = c;
}
```
- Example from [lora_transport.cpp L119](src/lora_transport.cpp#L119):
```cpp
if (pkt.payload_len > LORA_MAX_PAYLOAD) {
return false; // Reject oversized payloads
}
memcpy(&buffer[idx], pkt.payload, pkt.payload_len);
```
---
### 11. **Zigzag encoding is correct**
**Severity:** N/A
**File:** [payload_codec.cpp](src/payload_codec.cpp#L107)
**Code:**
```cpp
uint32_t zigzag32(int32_t x) {
return (static_cast<uint32_t>(x) << 1) ^ static_cast<uint32_t>(x >> 31);
}
```
**Assessment:** ✅ **CORRECT**
- Casting to `uint32_t` before the left shift avoids signed-overflow UB (the arithmetic right shift `x >> 31` is implementation-defined but two's-complement on this target)
- Standard protobuf zigzag encoding pattern
- Correctly handles signed integers
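For completeness, the matching decoder (the standard protobuf pattern; `zigzag32` is repeated from above so the round-trip is self-contained):

```cpp
#include <cstdint>

// Encoder as reviewed above, reproduced for self-containment.
uint32_t zigzag32(int32_t x) {
  return (static_cast<uint32_t>(x) << 1) ^ static_cast<uint32_t>(x >> 31);
}

// Decoder counterpart: undoes the interleaving so that
// unzigzag32(zigzag32(x)) == x for all int32_t x.
int32_t unzigzag32(uint32_t v) {
  return static_cast<int32_t>(v >> 1) ^ -static_cast<int32_t>(v & 1);
}
```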
---
### 12. **Payload encoding/decoding has solid bounds checking**
**Severity:** N/A
**File:** [payload_codec.cpp](src/payload_codec.cpp#L132-160)
**Assessment:** ✅ **GOOD DEFENSIVE PROGRAMMING**
Examples of proper bounds checks:
```cpp
// Check maximum samples
if (in.n > kMaxSamples) return false;
// Check feature mask validity
if ((in.present_mask & ~kPresentMaskValidBits) != 0) return false;
// Check consistency
if (bit_count32(in.present_mask) != in.n) return false;
// Check monotonically increasing energy
if (in.energy_wh[i] < in.energy_wh[i - 1]) return false;
// Check for 32-bit overflow when adding deltas
uint64_t sum = static_cast<uint64_t>(out->energy_wh[i-1]) + delta;
if (sum > UINT32_MAX) return false;
// Check phase value ranges
if (value < INT16_MIN || value > INT16_MAX) return false;
```
Excellent work on defense-in-depth.
---
### 13. **LoRa frame validation is robust**
**Severity:** N/A
**File:** [lora_transport.cpp](src/lora_transport.cpp#L126-180)
**Assessment:** ✅ **GOOD**
- Validates minimum packet size
- Validates maximum packet size
- CRC verification
- Message kind validation
- Signal strength logging
---
### 14. ⚠️ **Time-based security: Minimum epoch check**
**Severity:** LOW
**File:** [config.h](include/config.h#L81)
**Code:**
```cpp
constexpr uint32_t MIN_ACCEPTED_EPOCH_UTC = 1769904000UL; // 2026-02-01 00:00:00 UTC
```
**Issue:** This constant is a static lower bound that ages poorly. As years pass it keeps accepting any timestamp after 2026-02-01, so the guardrail against stale or bogus clock values grows steadily weaker, and the constant must be bumped by hand with each release.
**Recommendation:**
- Calculate dynamically: `MIN_ACCEPTED_EPOCH = compile_time_epoch - 5_years`
- Or use a configuration that can be updated via firmware
- Or accept any reasonable recent timestamp (e.g., >= 2020-01-01)
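The compile-time alternative can be sketched with Howard Hinnant's well-known `days_from_civil` algorithm; the `static_assert` confirms the constant quoted above (a build script could feed in the build date minus a few years, as suggested):

```cpp
#include <cstdint>

// Civil date -> days since 1970-01-01 (Howard Hinnant's algorithm),
// usable in constant expressions so the threshold need not be hard-coded.
constexpr int64_t days_from_civil(int y, unsigned m, unsigned d) {
  y -= m <= 2;
  const int era = (y >= 0 ? y : y - 399) / 400;
  const unsigned yoe = static_cast<unsigned>(y - era * 400);
  const unsigned doy = (153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1;
  const unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;
  return era * 146097LL + static_cast<int>(doe) - 719468;
}

constexpr uint32_t epoch_utc(int y, unsigned m, unsigned d) {
  return static_cast<uint32_t>(days_from_civil(y, m, d) * 86400LL);
}

// Matches the constant quoted in the issue: 2026-02-01 00:00:00 UTC.
static_assert(epoch_utc(2026, 2, 1) == 1769904000UL, "epoch mismatch");
```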
---
### 15. **Floating point NaN handling is correct**
**Assessment:** ✅ **GOOD**
The code properly uses `isnan()` throughout:
- [json_codec.cpp L13](src/json_codec.cpp#L13)
- [web_server.cpp L104](src/web_server.cpp#L104)
- [sd_logger.cpp L131](src/sd_logger.cpp#L131)
No integer division by zero issues detected either (checks for zero before division).
---
### 16. **Integer casting for power calculations handles overflow**
**Severity:** N/A
**File:** [web_server.cpp](src/web_server.cpp#L97)
**Code:**
```cpp
static int32_t round_power_w(float value) {
if (isnan(value)) return 0;
long rounded = lroundf(value);
if (rounded > INT32_MAX) return INT32_MAX; // Overflow protection
if (rounded < INT32_MIN) return INT32_MIN; // Underflow protection
return static_cast<int32_t>(rounded);
}
```
**Assessment:** ✅ **EXCELLENT** - Defensive against both positive and negative overflows
---
## Summary Table
| ID | Issue | Severity | Category | Status |
|---|---|---|---|---|
| 1 | No HTTPS/TLS | CRITICAL | Security | ⚠️ Needs Fix |
| 2 | Weak default credentials | HIGH | Security | ⚠️ Needs Fix |
| 3 | AP mode no auth | HIGH | Security | ⚠️ Needs Fix |
| 4 | Integer overflow in bins | MEDIUM | Memory | ⚠️ Needs Review |
| 5 | Memory leak potential | MEDIUM | Memory | ✅ Mitigated |
| 6 | CSV line truncation | MEDIUM | Data Handling | ✅ Safe |
| 7 | CSV injection | MEDIUM | Security | ✅ Low Risk |
| 8 | Path traversal | LOW | Security | ✅ Well Protected |
| 9-16 | Best practices | N/A | Quality | ✅ GOOD |
---
## Recommendations for Fixes
### Immediate (Critical Path)
1. **Enable HTTPS** - Implement TLS on ESP32 web server
2. **Strengthen AP mode security** - Either enable auth or use time-limited setup mode
3. **Improve default credentials** - Generate strong defaults or force user configuration
### Short-term (High Priority)
4. **Fix integer overflow checks** - Add explicit overflow detection before bin allocation
5. **Document security limitations** - Clearly state that HTTPS is not available
### Long-term (Nice to Have)
6. **Add audit logging** - Log all configuration changes and data access
7. **Implement certificate pinning** - Once HTTPS is added
8. **Add device firmware signature verification** - Prevent unauthorized updates
---
## Testing Recommendations
```bash
# Verify no plaintext credentials in traffic
tcpdump -A -i <interface> 'port 80 or port 1883' | grep -i password
# Test path traversal protection
curl "http://device/sd/download?path=/etc/passwd"
curl "http://device/sd/download?path=/../../../"
# Test XSS protection
curl "http://device/sender/<img%20src=x%20onerror=alert(1)>"
# Test OOM handling with large history requests
curl "http://device/history/start?days=365&res=1"
```
---
## Overall Assessment
**Grade: B+ (Good with areas for improvement)**
**Strengths:**
- Solid use of safe APIs and standard library functions
- Excellent bounds checking throughout
- Good defensive programming practices
- CRC validation and format validation
**Weaknesses:**
- Lack of encryption (HTTPS)
- Weak default security posture
- No security in AP mode
- Need better overflow protection in integer arithmetic
The codebase demonstrates good engineering practices and would be production-ready once the critical HTTPS and authentication issues are addressed.

README.md · 200 lines changed
```diff
@@ -1,122 +1,86 @@
 # DD3-LoRa-Bridge-MultiSender
-Firmware for LilyGO T3 v1.6.1 (`ESP32 + SX1276 + SSD1306`) that runs in two roles:
-- `Sender` (`GPIO14` HIGH): reads one IEC 62056-21 meter, builds 30-slot sparse batches, sends via LoRa.
-- `Receiver` (`GPIO14` LOW): receives/ACKs batches, publishes MQTT, serves web UI, logs to SD.
-## Architecture Summary
-- Single codebase, role selected at boot by `detect_role()` (`src/config.cpp`).
-- LoRa transport is wrapped with firmware-level CRC16-CCITT (`src/lora_transport.cpp`).
-- Sender meter ingest is decoupled from LoRa waits via FreeRTOS meter reader task + queue on ESP32 (`src/sender_state_machine.cpp`).
-- Batch payload codec is schema `v3` with a 30-bit `present_mask` over `[t_last-29, t_last]` (`lib/dd3_legacy_core/src/payload_codec.cpp`).
-- Sender retries reuse cached encoded payload bytes (no re-encode on retry path).
-- Sender ACK receive windows adapt from observed ACK RTT + miss streak.
-- Sender catch-up mode drains backlog with immediate extra sends when more than one batch is queued (still ACK-gated, single inflight batch).
-- Sender only starts normal metering/transmit flow after valid time bootstrap from receiver ACK.
-- Sender fault counters are reset at first valid time sync and again at each UTC hour boundary.
-- Receiver runs STA mode if stored config is valid and connects, otherwise AP fallback.
-## LoRa Protocol
-On-air frame:
+Firmware for LilyGO T3 v1.6.1 (`ESP32 + SX1276 + SSD1306`) that runs as either:
+- `Sender` (PIN `GPIO14` HIGH): reads multiple IEC 62056-21 meters, batches data, sends over LoRa.
+- `Receiver` (PIN `GPIO14` LOW): receives/ACKs batches, publishes MQTT, serves web UI, logs to SD.
+## Current Architecture
+- Single codebase, role selected at boot via `detect_role()` (`include/config.h`, `src/config.cpp`).
+- LoRa link uses explicit CRC16 frame protection in firmware (`src/lora_transport.cpp`), in addition to LoRa PHY CRC.
+- Sender batches up to `30` samples and retransmits on missing ACK (`BATCH_MAX_RETRIES=2`, policy `Keep`).
+- Receiver handles AP fallback when STA config is missing/invalid and exposes a config/status web UI.
+## LoRa Frame Protocol (Current)
+Frame format on-air:
 `[msg_kind:1][device_short_id:2][payload...][crc16:2]`
 `msg_kind`:
-- `0`: `BatchUp`
-- `1`: `AckDown`
-### BatchUp
-Transport layer chunks payload into:
-`[batch_id_le:2][chunk_index:1][chunk_count:1][total_len_le:2][chunk_payload...]`
-Receiver reassembles all chunks before decode.
-Payload codec (`schema=3`, magic `0xDDB3`) carries:
-- metadata: sender ID, batch ID, `t_last`, `present_mask`, battery mV, error counters
-- arrays per present sample: `energy_wh[]`, `p1_w[]`, `p2_w[]`, `p3_w[]`
-`n == 0` with `present_mask == 0` is valid and used for sync request packets.
-### AckDown (7 bytes payload)
+- `0` = `BatchUp`
+- `1` = `AckDown`
+### `BatchUp`
+`BatchUp` is chunked in transport (`batch_id`, `chunk_index`, `chunk_count`, `total_len`) and then decoded via `payload_codec`.
+Payload header contains:
+- fixed magic/schema fields (`kMagic=0xDDB3`, `kSchema=2`)
+- `schema_id`
+- sender/batch/time/error metadata
+Supported payload schemas in this branch:
+- `schema_id=1` (`EnergyMulti`): integer kWh for up to 3 meters (`energy1_kwh`, `energy2_kwh`, `energy3_kwh`)
+- `schema_id=0` (legacy): older energy/power delta encoding path remains decode-compatible
+`n == 0` is used as sync request (no meter samples).
+### `AckDown` (7 bytes)
 `[flags:1][batch_id_be:2][epoch_utc_be:4]`
 - `flags bit0`: `time_valid`
-- ACK is repeated (`ACK_REPEAT_COUNT=3`, `ACK_REPEAT_DELAY_MS=200`)
-- Sender sets local time only if `time_valid=1` and `epoch >= MIN_ACCEPTED_EPOCH_UTC` (`2026-02-01 00:00:00 UTC`)
-- Sender ACK wait windows are adaptive (short first window, expanded second window on miss)
-## Time Bootstrap and Timezone
-Sender boot starts in sync-only mode:
+- Receiver sends ACK repeatedly (`ACK_REPEAT_COUNT=3`, `ACK_REPEAT_DELAY_MS=200`).
+- Sender accepts time only if `time_valid=1` and `epoch >= MIN_ACCEPTED_EPOCH_UTC` (`2026-02-01 00:00:00 UTC`).
+## Time Bootstrap Guardrail
+On sender boot:
 - `g_time_acquired=false`
-- sends sync requests every `SYNC_REQUEST_INTERVAL_MS` (`15s`)
-- does not run normal 1 Hz sample/batch flow yet
-After valid ACK time:
-- `time_set_utc()` is called
-- `g_time_acquired=true`
-- sender fault counters are reset once (`err_m`, `err_d`, `err_tx`, last-error state)
-- normal 1 Hz sampling + periodic batch transmission starts
-After initial sync:
-- sender fault counters are reset again once per UTC hour when the hour index changes (`HH:00 UTC` boundary)
-Timezone:
-- `TIMEZONE_TZ` from `include/config.h` is applied in `time_manager`.
-- Web/OLED local-time rendering uses this timezone.
-- Default: `CET-1CEST,M3.5.0/2,M10.5.0/3`.
-## Sender Meter Path
-Implemented by `src/meter_driver.cpp` and sender loop in `src/sender_state_machine.cpp`:
-- UART: `Serial2`, `GPIO34`, `9600 7E1`
-- ESP32 RX buffer enlarged to `8192`
-- Frame detection `/ ... !`, timeout `METER_FRAME_TIMEOUT_MS=3000`
-- Single-pass OBIS line dispatch (no repeated multi-key scans per line)
-- Fixed-point decimal parser (dot/comma decimals), with early-exit once all required OBIS fields are captured
-- Parsed OBIS fields:
-- `0-0:96.8.0*255` meter Sekundenindex (hex u32)
-- `1-0:1.8.0` total energy (auto scales Wh -> kWh when unit is Wh)
-- `1-0:16.7.0` total active power
-- `1-0:36.7.0`, `56.7.0`, `76.7.0` phase powers
-Timestamp derivation:
-- anchor offset: `epoch_offset = epoch_now - meter_seconds`
-- sample epoch: `ts_utc = meter_seconds + epoch_offset`
-- jump checks: rollback, wall-time delta mismatch, anchor drift
-Sender builds sparse 30-slot windows and sends every `METER_SEND_INTERVAL_MS` (`30s`).
-When backlog is present (`batch_q > 1`), sender transmits additional queued batches immediately after ACK to reduce lag, while keeping stop-and-wait ACK semantics.
-Sender diagnostics (serial debug mode):
-- periodic structured `diag:` line with:
-- meter parser counters (`ok/parse_fail/overflow/timeout`)
-- meter queue stats (`depth/high-watermark/drops`)
-- ACK stats (`last RTT`, `EWMA RTT`, `miss streak`, timeout/retry totals)
-- sender runtime totals (`rx window ms`, `sleep ms`)
-- diagnostics are local-only (serial); LoRa payload schema/fields are unchanged.
+- only sync requests every `SYNC_REQUEST_INTERVAL_MS` (15s)
+- no normal sampling/transmit until valid ACK time received
+This prevents publishing/storing pre-threshold timestamps.
+## Multi-Meter Sender Behavior
+Implemented in `src/meter_driver.cpp` + sender path in `src/main.cpp`:
+- Meter protocol: IEC 62056-21 ASCII, Mode D style framing (`/ ... !`)
+- UART settings: `9600 7E1`
+- Parsed OBIS: `1-0:1.8.0`
+- Conversion: floor to integer kWh (`floorf`)
+Meter count is build-dependent (`include/config.h`):
+- Debug builds (`SERIAL_DEBUG_MODE=1`): `METER_COUNT=2`
+- Prod builds (`SERIAL_DEBUG_MODE=0`): `METER_COUNT=3`
+Default RX pins:
+- Meter1: `GPIO34` (`Serial2`)
+- Meter2: `GPIO25` (`Serial1`)
+- Meter3: `GPIO3` (`Serial`, prod only because debug serial is disabled)
 ## Receiver Behavior
-For decoded `BatchUp`:
-1. Reassemble and decode.
-2. Validate sender identity (`EXPECTED_SENDER_IDS` and payload sender ID mapping).
-3. Reject unknown/mismatched senders before ACK and before SD/MQTT/web updates.
-4. Send `AckDown` promptly for accepted senders.
-5. Track duplicates per configured sender.
-6. If duplicate: update duplicate counters/time, skip data write/publish.
-7. If `n==0`: sync request path only.
-8. Else reconstruct each sample timestamp from `t_last + present_mask`, then:
-- append to SD CSV
-- publish MQTT state
-- update web status and last batch table
-## MQTT
+For valid `BatchUp` decode:
+1. Reassemble chunks and decode payload.
+2. Send `AckDown` immediately.
+3. Drop duplicate batches per sender (`batch_id` tracking).
+4. If `n==0`: treat as sync request only.
+5. Else convert to `MeterData`, log to SD, update web UI, publish MQTT.
+## MQTT Topics and Payloads
 State topic:
 - `smartmeter/<device_id>/state`
```
@@ -124,58 +88,26 @@ State topic:
Fault topic (retained): Fault topic (retained):
- `smartmeter/<device_id>/faults` - `smartmeter/<device_id>/faults`
State JSON (`lib/dd3_legacy_core/src/json_codec.cpp`) includes: For `EnergyMulti` samples, state JSON includes:
- `id`, `ts`, `e_kwh` - `id`, `ts`
- `p_w`, `p1_w`, `p2_w`, `p3_w` - `energy1_kwh`, `energy2_kwh`, optional `energy3_kwh`
- `bat_v`, `bat_pct` - `bat_v`, `bat_pct`
- optional link: `rssi`, `snr` - optional link fields: `rssi`, `snr`
- `err_last`, `rx_reject`, `rx_reject_text` - fault/reject fields: `err_last`, `rx_reject`, `rx_reject_text` (+ non-zero counters)
- non-zero fault counters when available
Home Assistant discovery publishing is enabled (`ENABLE_HA_DISCOVERY=true`) but still advertises legacy keys (`e_kwh`, `p_w`, `p1_w`, `p2_w`, `p3_w`) in `src/mqtt_client.cpp`.
## Web UI, Wi-Fi, Storage
- STA config is stored in Preferences (`wifi_manager`).
- If STA/MQTT config is unavailable, the receiver starts AP mode with SSID prefix `DD3-Bridge-`.
- Web auth defaults are `admin/admin` (`WEB_AUTH_DEFAULT_USER/PASS`).
- SD logging is enabled (`ENABLE_SD_LOGGING=true`).
## Build Environments
From `platformio.ini`:
- `lilygo-t3-v1-6-1` - `lilygo-t3-v1-6-1`
- `lilygo-t3-v1-6-1-test` - `lilygo-t3-v1-6-1-test`
- `lilygo-t3-v1-6-1-868` - `lilygo-t3-v1-6-1-868`
Example:
```bash
~/.platformio/penv/bin/pio run -e lilygo-t3-v1-6-1
```
## Test Mode
`ENABLE_TEST_MODE` replaces normal sender/receiver loops with dedicated test loops (`src/test_mode.cpp`). It sends/receives plain JSON test frames and publishes to `smartmeter/<device_id>/test`.

# Republish Scripts Compatibility Report
**Date:** March 11, 2026
**Focus:** Validate both Python scripts work with newest CSV exports and InfluxDB layouts
---
## Executive Summary
**BOTH SCRIPTS ARE COMPATIBLE** with current SD card CSV exports and MQTT formats.
**Test Results:**
- ✓ CSV parsing works with current `ts_hms_local` format
- ✓ Backward compatible with legacy format (no `ts_hms_local`)
- ✓ MQTT JSON output format matches device expectations
- ✓ All required fields present in current schema
- ⚠ One documentation error found and fixed
---
## Tests Performed
### 1. CSV Format Compatibility ✓
**File:** `republish_mqtt.py`, `republish_mqtt_gui.py`
**Test:** Parsing current SD logger CSV format
**Current format from device (`src/sd_logger.cpp` line 105):**
```
ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
```
**Result:** ✓ PASS
- Both scripts check for required fields: `ts_utc`, `e_kwh`, `p_w`
- Second column (`ts_hms_local`) is NOT required - scripts ignore it gracefully
- All optional fields handled correctly
- Field parsing preserves data types correctly
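A minimal sketch of the parsing approach verified above, using the documented CSV header. The helper name `parse_samples` is illustrative, not the scripts' actual API; only the required-field check mirrors the tested behavior.

```python
import csv
import io

REQUIRED_FIELDS = ("ts_utc", "e_kwh", "p_w")

def parse_samples(csv_text):
    """Yield rows containing all required fields; extra columns such as
    ts_hms_local are carried along by DictReader but never required."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        if all(row.get(f) not in (None, "") for f in REQUIRED_FIELDS):
            yield row

sample = (
    "ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,"
    "rssi,snr,err_m,err_d,err_tx,err_last\n"
    "1710076800,09:00:00,5432,1800,1816,1816,1234.57,4.15,95,-95,9.25,0,0,0,0\n"
)
rows = list(parse_samples(sample))
```

Because rows are addressed by header name, a legacy file without `ts_hms_local` parses the same way.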
### 2. Future CSV Format Extensibility ✓
**Test:** Scripts handle additional CSV columns without breaking
**Result:** ✓ PASS
- Scripts use `csv.DictReader`, so fields are addressed by header name and only the needed columns are consumed
- New columns (e.g., `rx_reject`, `rx_reject_text`) don't cause errors
- **Note:** New fields in CSV won't be republished unless code is updated
### 3. MQTT JSON Output Format ✓
**File:** Both scripts
**Test:** Validation that republished JSON matches device expectations
**Generated format by republish scripts:**
```json
{
"id": "F19C",
"ts": 1710076800,
"e_kwh": "1234.57",
"p_w": 5432,
"p1_w": 1800,
"p2_w": 1816,
"p3_w": 1816,
"bat_v": "4.15",
"bat_pct": 95,
"rssi": -95,
"snr": 9.25
}
```
**Result:** ✓ PASS
- Field names match device output (`src/json_codec.cpp`)
- Data types correctly converted:
- `e_kwh`, `bat_v`: strings with 2 decimal places
- `ts`, `p_w`, etc: integers
- `snr`: float
- Device subscription will correctly parse this format
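The type conversions listed above can be sketched as a single row-to-JSON helper. `row_to_state_json` is a hypothetical name for illustration; the formatting rules (2-decimal strings for `e_kwh`/`bat_v`, integers for counts, float for `snr`) come from the report itself.

```python
import json

def row_to_state_json(row, short_id):
    """Convert one CSV row (all string values) to the state JSON shape."""
    return {
        "id": short_id,
        "ts": int(row["ts_utc"]),
        "e_kwh": f"{float(row['e_kwh']):.2f}",   # 2-decimal string
        "p_w": int(round(float(row["p_w"]))),
        "p1_w": int(round(float(row["p1_w"]))),
        "p2_w": int(round(float(row["p2_w"]))),
        "p3_w": int(round(float(row["p3_w"]))),
        "bat_v": f"{float(row['bat_v']):.2f}",   # 2-decimal string
        "bat_pct": int(row["bat_pct"]),
        "rssi": int(row["rssi"]),
        "snr": float(row["snr"]),
    }

row = {"ts_utc": "1710076800", "e_kwh": "1234.57", "p_w": "5432",
       "p1_w": "1800", "p2_w": "1816", "p3_w": "1816",
       "bat_v": "4.15", "bat_pct": "95", "rssi": "-95", "snr": "9.25"}
payload = json.dumps(row_to_state_json(row, "F19C"))
```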
### 4. Legacy CSV Format (Backward Compatibility) ✓
**Test:** Scripts still work with older CSV files without `ts_hms_local`
**Legacy format:**
```
ts_utc,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr
```
**Result:** ✓ PASS
- Matches device behavior (README: "History parser accepts both")
- Scripts will process these files without modification
### 5. InfluxDB Schema Requirements ⚠
**Files:** Both scripts (`InfluxDBHelper` class)
**Test:** Verify expected InfluxDB measurement and tag names
**Expected InfluxDB Query:**
```flux
from(bucket: "smartmeter")
|> range(start: <timestamp>, stop: <timestamp>)
|> filter(fn: (r) => r._measurement == "smartmeter" and r.device_id == "dd3-F19C")
```
**Result:** ✓ SCHEMA OK, ⚠ MISSING BRIDGE
- Measurement: `"smartmeter"`
- Tag name: `"device_id"`
- **CRITICAL NOTE:** Device firmware does NOT write directly to InfluxDB
- Device publishes to MQTT only
- Requires external bridge (Telegraf, Node-RED, Home Assistant, etc.)
- If InfluxDB is unavailable, scripts default to manual mode ✓
---
## Issues Found
### Issue 1: Documentation Error ❌
**Severity:** HIGH (documentation only, code works)
**File:** `REPUBLISH_README.md` line 84
**Description:**
Incorrect column name in documented CSV format
**Current (WRONG):**
```
ts_utc,ts_hms_utc,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
↑↑↑↑↑ INCORRECT
```
**Should be (CORRECT):**
```
ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
↑↑↑↑↑↑↑↑ CORRECT (local timezone)
```
**Evidence:**
- `src/sd_logger.cpp` line 105: `f.println("ts_utc,ts_hms_local,...")`
- `src/sd_logger.cpp` line 108: `String ts_hms_local = format_hms_local(data.ts_utc);`
- `README.md` line 162: Says `ts_hms_local` (correct)
**Impact:** Users reading `REPUBLISH_README.md` may be confused about CSV format
**Fix Status:** ✅ APPLIED
---
### Issue 2: CSV Fields Not Republished ⚠
**Severity:** MEDIUM (limitation, not a bug)
**Files:** Both scripts
**Description:**
CSV file contains error counter fields (`err_m`, `err_d`, `err_tx`, `err_last`) and device now sends `rx_reject`, `rx_reject_text`, but republish scripts don't read/resend these fields.
**Current behavior:**
- Republished JSON: `{id, ts, e_kwh, p_w, p1_w, p2_w, p3_w, bat_v, bat_pct, rssi, snr}`
- NOT included in republished JSON:
- `err_m` (meter errors) → CSV has this, not republished
- `err_d` (decode errors) → CSV has this, not republished
- `err_tx` (LoRa TX errors) → CSV has this, not republished
- `err_last` (last error code) → CSV has this, not republished
- `rx_reject` → Device publishes, but not in CSV
**Impact:**
- When recovering lost data from CSV, error counters won't be restored to MQTT
- These non-critical diagnostic fields are rarely needed for recovery
- Main meter data (energy, power, battery) is fully preserved
**Recommendation:**
- Current behavior is acceptable (data loss recovery focused on meter data)
- If error counters are needed, update scripts to parse/republish them
- Add note to documentation explaining what's NOT republished
**Fix Status:** ✅ DOCUMENTED (no code change needed)
---
### Issue 3: InfluxDB Auto-Detect Optional
**Severity:** LOW (feature is optional)
**Files:** Both scripts
**Description:**
Scripts expect InfluxDB for auto-detecting missing data ranges, but:
1. Device firmware doesn't write InfluxDB directly
2. Requires external MQTT→InfluxDB bridge that may not exist
3. If missing, scripts gracefully fall back to manual time selection
**Current behavior:**
- `HAS_INFLUXDB = True` or `False` based on import
- If True: InfluxDB auto-detect tab/option available
- If unavailable: Scripts still work in manual mode
- No error if InfluxDB credentials are wrong (graceful degradation)
**Impact:** None - graceful fallback exists
**Fix Status:** ✅ WORKING AS DESIGNED
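The import-time detection described above follows a standard optional-dependency pattern; a sketch (the surrounding helper is illustrative, only the `HAS_INFLUXDB` flag behavior is taken from the report):

```python
# InfluxDB support is optional: detect the client library at import
# time and fall back to manual mode when it is absent.
try:
    from influxdb_client import InfluxDBClient  # optional dependency
    HAS_INFLUXDB = True
except ImportError:
    InfluxDBClient = None
    HAS_INFLUXDB = False

def auto_detect_available():
    """Auto-detect mode is only offered when the client imported."""
    return HAS_INFLUXDB
```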
---
## Data Flow Analysis
### Current CSV Export (Device → SD Card)
```
Device state (MeterData)
src/sd_logger_log_sample()
CSV format: ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
/dd3/<device_id>/YYYY-MM-DD.csv (local timezone date)
```
### MQTT Publishing (Device → MQTT Broker)
```
Device state (MeterData)
meterDataToJson()
JSON: {id, ts, e_kwh, p_w, p1_w, p2_w, p3_w, bat_v, bat_pct, rssi, snr, err_last, rx_reject, rx_reject_text}
Topic: smartmeter/<device_id>/state
```
### CSV Republishing (CSV → MQTT)
```
CSV file
republish_csv() reads: ts_utc,e_kwh,p_w,p1_w,p2_w,p3_w,bat_v,bat_pct,rssi,snr[,err_*]
Builds JSON: {id, ts, e_kwh, p_w, p1_w, p2_w, p3_w, bat_v, bat_pct, rssi, snr}
Publishes: smartmeter/<device_id>/state
NOTE: err_m,err_d,err_tx,err_last from CSV are NOT republished
NOTE: rx_reject,rx_reject_text are not in CSV so can't be republished
```
### InfluxDB Integration (Optional)
```
Device publishes MQTT
[EXTERNAL BRIDGE - Telegraf/Node-RED/etc] (NOT PART OF FIRMWARE)
InfluxDB: measurement="smartmeter", tag device_id=<id>
republish_mqtt.py (if InfluxDB available) uses auto-detect
Otherwise: manual time range selection (always works)
```
---
## Recommendations
### ✅ IMMEDIATE ACTIONS
1. **Fix documentation** in `REPUBLISH_README.md` line 84: Change `ts_hms_utc` → `ts_hms_local`
### 🔄 OPTIONAL ENHANCEMENTS
2. **Add error field republishing** if needed:
- Modify CSV parsing to read: `err_m`, `err_d`, `err_tx`, `err_last`
- Add to MQTT JSON output
- Test with device error handling
3. **Document missing fields** in README:
- Explain that error counters aren't republished from CSV
- Explain that `rx_reject` field won't appear in recovered data
- Recommend manual time selection over InfluxDB if bridge is missing
4. **Add InfluxDB bridge documentation:**
- Create example Telegraf configuration
- Document MQTT→InfluxDB schema assumptions
- Add troubleshooting guide for InfluxDB queries
### TESTING
- Run `test_republish_compatibility.py` after any schema changes
- Test with actual CSV files from devices (check for edge cases)
- Verify InfluxDB queries work with deployed bridge
---
## Compatibility Matrix
| Component | Version | Compatible | Notes |
|-----------|---------|------------|-------|
| CSV Format | Current (ts_hms_local) | ✅ YES | Tested |
| CSV Format | Legacy (no ts_hms_local) | ✅ YES | Backward compatible |
| MQTT JSON Output | Current | ✅ YES | All fields matched |
| InfluxDB Schema | Standard | ✅ OPTIONAL | Requires external bridge |
| Python Version | 3.7+ | ✅ YES | No version-specific features |
| Dependencies | requirements_republish.txt | ✅ YES | All installed correctly |
---
## Conclusion
**Both Python scripts (`republish_mqtt.py` and `republish_mqtt_gui.py`) are FULLY COMPATIBLE with the newest CSV exports and device layouts.**
The only issue found is a documentation typo that should be fixed. The scripts work correctly with:
- ✅ Current CSV format from device SD logger
- ✅ Legacy CSV format for backward compatibility
- ✅ Device MQTT JSON schema
- ✅ InfluxDB auto-detect (optional, gracefully degraded if unavailable)
No code changes are required, only documentation correction.

# DD3 MQTT Data Republisher - GUI Version
User-friendly graphical interface for recovering lost meter data from SD card CSV files and republishing to MQTT.
## Installation
```bash
# Install dependencies (same as CLI version)
pip install -r requirements_republish.txt
```
## Usage
### Launch the GUI
```bash
# Windows
python republish_mqtt_gui.py
# macOS/Linux
python3 republish_mqtt_gui.py
```
## Interface Overview
### Settings Tab
Configure MQTT connection and data source:
- **CSV File**: Browse and select the CSV file from your SD card
- **Device ID**: Device identifier (e.g., `dd3-F19C`)
- **MQTT Settings**: Broker address, port, username/password
- **Publish Rate**: Messages per second (1-100, default: 5)
- **Test Connection**: Verify MQTT broker is reachable
### Time Range Tab
Choose how to select the time range to republish:
#### Manual Mode (Always Available)
- Enter start and end dates/times
- Example: Start `2026-03-01` at `00:00:00`, End `2026-03-05` at `23:59:59`
- Useful when you know exactly what data is missing
#### Auto-Detect Mode (Requires InfluxDB)
- Automatically finds gaps in your InfluxDB data
- Connect to your InfluxDB instance
- Script will identify the oldest missing data range
- Republish that range automatically
### Progress Tab
Real-time status during publishing:
- **Progress Bar**: Visual indication of publishing status
- **Statistics**: Count of published/skipped samples, current rate
- **Log Output**: Detailed logging of all actions
## Step-by-step Example
1. **Prepare CSV File**
- Extract CSV file from SD card
- Example path: `D:\dd3-F19C\2026-03-09.csv`
2. **Launch GUI**
```bash
python republish_mqtt_gui.py
```
3. **Settings Tab**
- Click "Browse..." and select the CSV file
- Enter Device ID: `dd3-F19C`
- MQTT Broker: `192.168.1.100` (or your broker address)
- Test connection to verify MQTT is working
4. **Time Range Tab**
- **Manual Mode**: Enter dates you want to republish
- Start: `2026-03-09` / `08:00:00`
- End: `2026-03-09` / `18:00:00`
- **Or Auto-Detect**: Fill InfluxDB settings if available
5. **Progress Tab**
- View real-time publishing progress
- Watch the log for detailed status
6. **Start**
- Click "Start Publishing" button
- Monitor progress in real-time
- Success message when complete
## Tips
### CSV File Location
On Windows with SD card reader:
- Drive letter shows up (e.g., `D:\`)
- Path is usually: `D:\dd3\[DEVICE-ID]\[DATE].csv`
On Linux with SD card:
- Example: `/mnt/sd/dd3/dd3-F19C/2026-03-09.csv`
### Finding Device ID
- Displayed on device's OLED screen
- Also in CSV directory names on SD card
- Format: `dd3-XXXX` where XXXX is hex device short ID
### Rate Limiting
- **Conservative** (1-2 msg/sec): For unreliable networks or busy brokers
- **Default** (5 msg/sec): Recommended, safe for most setups
- **Fast** (10+ msg/sec): Only if you know your broker can handle it
### InfluxDB Auto-Detect
Requires:
- InfluxDB running and accessible
- Valid API token
- Correct organization and bucket names
- Data already stored in InfluxDB bucket
If InfluxDB unavailable: Fall back to manual time selection
## Troubleshooting
### "Could not connect to MQTT broker"
- Check broker address and port
- Verify firewall allows connection
- Check if broker is running
- Try "Test Connection" button
### "CSV file not found"
- Verify file path is correct
- Try re-selecting file with Browse button
- Ensure file is readable
### "0 samples published"
- Time range may not match CSV data
- Try wider time range
- Check CSV file contains data
- Verify timestamps are Unix format
### "InfluxDB connection error"
- Check InfluxDB URL is running
- Verify API token is valid
- Check organization and bucket name
- Try accessing InfluxDB web UI manually
### GUI is slow or unresponsive
- This is normal during MQTT publishing
- GUI updates in background
- Wait for operation to complete
- Check Progress tab for live updates
## Keyboard Shortcuts
- Tab: Move to next field
- Enter: Start publishing from most tabs
- Ctrl+C: Exit (if launched from terminal)
## File Structure
```
republish_mqtt.py → Command-line version
republish_mqtt_gui.py → GUI version (this)
requirements_republish.txt → Python dependencies
REPUBLISH_README.md → Full documentation
```
Use the **GUI** if you prefer a point-and-click interface.
Use the **CLI** if you want to automate or run it from scripts.
## Platform Support
**Windows 10/11** - Native support
**macOS** - Works with Python 3.7+
**Linux** (Ubuntu, Debian, Fedora) - Works with Python 3.7+
All platforms use tkinter (included with Python).
## Performance
Typical times on a standard PC:
- 1 day of data (~2800 samples): ~9-10 minutes at 5 msg/sec
- 1 week of data (~19,600 samples): ~65 minutes at 5 msg/sec
Time = (Number of Samples) / (Rate in msg/sec)
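The estimate formula above, applied to the figures quoted in this section:

```python
def publish_minutes(samples, rate_per_sec):
    """Estimated wall-clock publishing time in minutes:
    Time = samples / rate, converted from seconds to minutes."""
    return samples / rate_per_sec / 60.0

one_day = publish_minutes(2800, 5)    # about 9.3 minutes
one_week = publish_minutes(19600, 5)  # about 65 minutes
```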
## License
Same as DD3 project

# DD3 MQTT Data Republisher
Standalone Python script to recover and republish lost meter data from SD card CSV files to MQTT.
## Features
- **Rate-limited publishing**: Sends 5 messages/second by default (configurable) to prevent MQTT broker overload
- **Two modes of operation**:
- **Auto-detect**: Connect to InfluxDB to find gaps in recorded data
- **Manual selection**: User specifies start/end time range
- **Cross-platform**: Works on Windows, macOS, and Linux
- **CSV parsing**: Reads SD card CSV export format and converts to MQTT JSON
- **Interactive mode**: Walks user through configuration step-by-step
- **Command-line mode**: Scripting and automation friendly
## Installation
### Prerequisites
- Python 3.7 or later
### Setup
```bash
# Install dependencies
pip install -r requirements_republish.txt
```
### Optional: InfluxDB support
Automatic gap detection requires the `influxdb-client` package; it is listed in `requirements_republish.txt`, so it installs with the other dependencies. If you only need manual time selection, the scripts also run without it.
## Usage
### Interactive Mode (Recommended for first use)
```bash
python republish_mqtt.py -i
```
The script will prompt you for:
1. CSV file location (with auto-discovery)
2. Device ID
3. MQTT broker settings
4. Time range (manual or auto-detect from InfluxDB)
### Command Line Mode
#### Republish a specific time range:
```bash
python republish_mqtt.py \
-f path/to/data.csv \
-d dd3-F19C \
--mqtt-broker 192.168.1.100 \
--mqtt-user admin \
--mqtt-pass password \
--from-time "2026-03-01" \
--to-time "2026-03-05"
```
#### Auto-detect missing data with InfluxDB:
```bash
python republish_mqtt.py \
-f path/to/data.csv \
-d dd3-F19C \
--mqtt-broker 192.168.1.100 \
--influxdb-url http://localhost:8086 \
--influxdb-token mytoken123 \
--influxdb-org myorg \
--influxdb-bucket smartmeter
```
#### Different publish rate (slower for stability):
```bash
python republish_mqtt.py \
-f data.csv \
-d dd3-F19C \
--mqtt-broker localhost \
--rate 2 # 2 messages per second instead of 5
```
## CSV Format
The script expects CSV files exported from the SD card with this header:
```
ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
```
Note: `ts_hms_local` is the local time (HH:MM:SS) in your configured timezone, not UTC. The `ts_utc` field contains the Unix timestamp in UTC.
Each row is one meter sample. The script converts these to MQTT JSON format:
```json
{
"id": "F19C",
"ts": 1710076800,
"e_kwh": "1234.56",
"p_w": 5432,
"p1_w": 1800,
"p2_w": 1816,
"p3_w": 1816,
"bat_v": "4.15",
"bat_pct": 95,
"rssi": -95,
"snr": 9.25
}
```
## How It Works
### Manual Mode (Fallback)
1. User specifies a time range (start and end timestamps)
2. Script reads CSV file
3. Filters samples within the time range
4. Publishes to MQTT topic: `smartmeter/{device_id}/state`
5. Respects rate limiting (5 msg/sec by default)
### Auto-Detect Mode (with InfluxDB)
1. Script connects to InfluxDB
2. Queries for existing data in the specified bucket
3. Identifies gaps (time ranges with no data)
4. Shows gaps to user
5. Republishes the first (oldest) gap from CSV file
6. User can re-run to fill subsequent gaps
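Step 3 (identifying gaps) reduces to comparing consecutive recorded timestamps against the expected sample interval. A simplified stand-in — the real scripts query InfluxDB via Flux rather than receiving a plain timestamp list, and the 60-second threshold here is an assumption:

```python
def find_gaps(timestamps, max_interval_s=60):
    """Return (start, end) pairs where consecutive recorded timestamps
    are further apart than max_interval_s, oldest first."""
    ts = sorted(timestamps)
    gaps = []
    for prev, cur in zip(ts, ts[1:]):
        if cur - prev > max_interval_s:
            gaps.append((prev, cur))
    return gaps

# Samples every 30 s with roughly an hour missing in the middle:
recorded = [0, 30, 60, 3660, 3690]
gaps = find_gaps(recorded)  # -> [(60, 3660)]
```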
## Rate Limiting
By default, the script publishes 5 messages per second. This is:
- **Safe for most MQTT brokers** (no risk of overload)
- **Fast enough** (a typical day's data, ~2800 samples, republishes in under ten minutes)
- **Adjustable** with `--rate` parameter
Examples:
- `--rate 1`: 1 msg/sec (very conservative)
- `--rate 5`: 5 msg/sec (default, recommended)
- `--rate 10`: 10 msg/sec (only if broker can handle it)
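The rate limit amounts to sleeping the per-message interval between sends. A sketch with the sleep function injected so the pacing is testable; `publish_all` is an illustrative name, not the scripts' actual function:

```python
import time

def publish_all(samples, publish, rate=5, sleep=time.sleep):
    """Publish samples at roughly `rate` messages per second by
    sleeping 1/rate seconds between consecutive sends."""
    interval = 1.0 / rate
    sent = 0
    for sample in samples:
        publish(sample)
        sent += 1
        if sent < len(samples):  # no trailing sleep after the last one
            sleep(interval)
    return sent

# Injecting a recording "sleep" shows the pacing without waiting:
delays = []
publish_all([1, 2, 3], publish=lambda s: None, rate=2,
            sleep=delays.append)
# delays == [0.5, 0.5]
```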
## Device ID
The device ID is used to determine the MQTT topic. It appears on the device display and in the CSV directory structure:
- Example: `dd3-F19C`
- Short ID (last 4 characters): `F19C`
You can use either form; the script extracts the short ID for the MQTT topic.
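Extracting the short form, sketched (helper names are illustrative):

```python
def short_id(device_id):
    """'dd3-F19C' -> 'F19C'; a bare 'F19C' passes through unchanged."""
    return device_id.split("-")[-1].upper()

def state_topic(device_id):
    """Full device ID is used in the MQTT topic path."""
    return f"smartmeter/{device_id}/state"

sid = short_id("dd3-F19C")       # 'F19C'
topic = state_topic("dd3-F19C")  # 'smartmeter/dd3-F19C/state'
```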
## Time Format
Dates can be specified in multiple formats:
- `2026-03-01` (YYYY-MM-DD)
- `2026-03-01 14:30:00` (YYYY-MM-DD HH:MM:SS)
- `14:30:00` (HH:MM:SS - uses today's date)
- `14:30` (HH:MM - uses today's date)
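Accepting the four formats above comes down to trying `strptime` patterns in order and filling in today's date for time-only inputs. A sketch; the function name and exact fallback order are assumptions:

```python
from datetime import datetime

_FORMATS = ("%Y-%m-%d %H:%M:%S", "%Y-%m-%d", "%H:%M:%S", "%H:%M")

def parse_time(text):
    """Try the documented formats in order; time-only inputs are
    combined with today's date."""
    for fmt in _FORMATS:
        try:
            dt = datetime.strptime(text.strip(), fmt)
        except ValueError:
            continue
        if "%Y" not in fmt:  # time-only: substitute today's date
            today = datetime.now()
            dt = dt.replace(year=today.year, month=today.month,
                            day=today.day)
        return dt
    raise ValueError(f"unrecognized time format: {text!r}")

start = parse_time("2026-03-01")           # midnight on that date
end = parse_time("2026-03-01 14:30:00")
```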
## Examples
### Scenario 1: Recover data from yesterday
```bash
python republish_mqtt.py -i
# Select CSV file → dd3-F19C_2026-03-09.csv
# Device ID → dd3-F19C
# MQTT broker → 192.168.1.100
# Choose manual time selection
# From → 2026-03-09 00:00:00
# To → 2026-03-10 00:00:00
```
### Scenario 2: Find and fill gaps automatically
```bash
python republish_mqtt.py \
-f path/to/csv/dd3-F19C/*.csv \
-d dd3-F19C \
--mqtt-broker mosquitto.example.com \
--mqtt-user admin --mqtt-pass changeme \
--influxdb-url http://influxdb:8086 \
--influxdb-token mytoken \
--influxdb-org myorg
```
### Scenario 3: Slow publishing for unreliable connection
```bash
python republish_mqtt.py -i --rate 1
```
## Troubleshooting
### "Cannot connect to MQTT broker"
- Check broker address and port
- Verify firewall rules
- Check username/password if required
- Test connectivity: `ping broker_address`
### "No data in CSV file"
- Verify CSV file path exists
- Check that CSV has data rows (not just header)
- Ensure device ID matches CSV directory name
### "InfluxDB query error"
- Verify InfluxDB is running and accessible
- Check API token validity
- Verify organization name
- Check bucket contains data
### "Published 0 samples"
- CSV file may be empty
- Time range may not match any data in CSV
- Try a wider date range
- Check that CSV timestamps are in Unix format
## Performance
Typical performance on a standard PC:
- **CSV parsing**: ~10,000 rows/second
- **MQTT publishing** (at 5 msg/sec): 1 day's worth of data (~2800 samples) takes ~9 minutes
For large files (multiple weeks of data), the script may take longer. This is expected and safe.
## Advanced: Scripting
For automation, you can use command-line mode with environment variables or config files:
```bash
#!/bin/bash
# Recover last 3 days of data
DEVICE_ID="dd3-F19C"
CSV_DIR="/mnt/sd/dd3/$DEVICE_ID"
FROM=$(date -d '3 days ago' '+%Y-%m-%d')
TO=$(date '+%Y-%m-%d')
python republish_mqtt.py \
-f "$(ls -t $CSV_DIR/*.csv | head -1)" \
-d "$DEVICE_ID" \
--mqtt-broker mqtt.example.com \
--mqtt-user admin \
--mqtt-pass changeme \
--from-time "$FROM" \
--to-time "$TO" \
--rate 5
```
## License
Same as DD3 project
## Support
For issues or feature requests, check the project repository.

# Python Scripts Compatibility Check - Summary
## ✅ VERDICT: Both Scripts Work with Newest CSV and InfluxDB Formats
**Tested:** `republish_mqtt.py` and `republish_mqtt_gui.py`
**Test Date:** March 11, 2026
**Result:** 5/5 compatibility tests passed
---
## Quick Reference
| Check | Status | Details |
|-------|--------|---------|
| CSV Parsing | ✅ PASS | Reads current `ts_utc,ts_hms_local,...` format correctly |
| CSV Backward Compat | ✅ PASS | Also works with legacy format (no `ts_hms_local`) |
| MQTT JSON Output | ✅ PASS | Generated JSON matches device expectations |
| Future Fields | ✅ PASS | Scripts handle new CSV columns without breaking |
| InfluxDB Schema | ✅ PASS | Query format matches expected schema (optional feature) |
| **Documentation** | ⚠️ FIXED | Corrected typo: `ts_hms_utc` → `ts_hms_local` |
| **Syntax Errors** | ✅ PASS | Both scripts compile cleanly |
---
## Test Results Summary
### 1. CSV Format Compatibility ✅
**Current device CSV (sd_logger.cpp):**
```
ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
```
- Both scripts check for required fields: `ts_utc`, `e_kwh`, `p_w`
- Optional fields are read gracefully when present
- Field types are correctly converted
- **Scripts work without modification**
### 2. MQTT JSON Output Format ✅
**Republished JSON matches device format:**
```json
{
"id": "F19C",
"ts": 1710076800,
"e_kwh": "1234.57",
"p_w": 5432,
"p1_w": 1800,
"p2_w": 1816,
"p3_w": 1816,
"bat_v": "4.15",
"bat_pct": 95,
"rssi": -95,
"snr": 9.25
}
```
- All required fields present
- Data types and formatting match expectations
- Compatible with MQTT subscribers and Home Assistant
- **No changes needed**
### 3. Backward Compatibility ✅
- Legacy CSV files (without `ts_hms_local`) still work
- Scripts ignore columns they don't understand
- Can process CSV files from both old and new firmware versions
- **Future-proof**
### 4. InfluxDB Auto-Detect ✅
- Scripts expect: measurement `"smartmeter"`, tag `"device_id"`
- Auto-detect is optional (falls back to manual time selection)
- ⚠️ NOTE: Device firmware doesn't write InfluxDB directly
- Requires external bridge (Telegraf, Node-RED, etc.)
- If bridge missing, manual mode works fine
- **Graceful degradation**
---
## Issues Found
### 🔴 Issue 1: Documentation Error (FIXED)
**Severity:** HIGH (documentation error, code is fine)
**File:** `REPUBLISH_README.md` line 84
**Problem:** Header listed as `ts_hms_utc` but actual device writes `ts_hms_local`
**What Changed:**
- ❌ Before: `ts_utc,ts_hms_utc,p_w,...` (typo)
- ✅ After: `ts_utc,ts_hms_local,p_w,...` (correct)
**Reason:** `ts_hms_local` is local time in your configured timezone, not UTC. The `ts_utc` field is the actual UTC timestamp.
---
### ⚠️ Issue 2: Error Fields Not Republished (EXPECTED LIMITATION)
**Severity:** LOW (not a bug, limitation of feature)
**What's missing:**
- CSV contains: `err_m`, `err_d`, `err_tx`, `err_last` (error counters)
- Republished JSON doesn't include these fields
- **Impact:** Error diagnostics won't be restored from recovered CSV
**Why:**
- Error counters are diagnostic/status info, not core meter data
- Main recovery goal is saving energy/power readings (which ARE included)
- Error counters reset at UTC hour boundaries anyway
**Status:** ✅ DOCUMENTED in report, no code change needed
---
### Issue 3: InfluxDB Bridge Required (EXPECTED)
**Severity:** INFORMATIONAL
**What it means:**
- Device publishes to MQTT only
- InfluxDB auto-detect requires external MQTT→InfluxDB bridge
- Examples: Telegraf, Node-RED, Home Assistant
**Status:** ✅ WORKING AS DESIGNED - manual mode always available
---
## What Was Tested
### Test Suite: `test_republish_compatibility.py`
- ✅ CSV parser can read current device format
- ✅ Scripts handle new fields gracefully
- ✅ MQTT JSON output format validation
- ✅ Legacy CSV format compatibility
- ✅ InfluxDB schema requirements
**Run test:** `python test_republish_compatibility.py`
---
## Files Modified
1. **REPUBLISH_README.md** - Fixed typo in CSV header documentation
2. **REPUBLISH_COMPATIBILITY_REPORT.md** - Created detailed compatibility analysis (this report)
3. **test_republish_compatibility.py** - Created test suite for future validation
---
## Recommendations
### ✅ Done (No Action Needed)
- Both scripts already work correctly
- Test suite created for future validation
- Documentation error fixed
### 🔄 Optional Enhancements (For Later)
1. Update scripts to parse/republish error fields if needed
2. Document InfluxDB bridge setup (Telegraf example)
3. Add more edge case tests (missing fields, malformed data, etc.)
### 📋 For Users
- Keep using both scripts as-is
- Use **manual time selection** if InfluxDB is unavailable
- Refer to updated REPUBLISH_README.md for correct CSV format
---
## Technical Details
### CSV Processing Flow
```
1. Read CSV with csv.DictReader
2. Check for required fields: ts_utc, e_kwh, p_w
3. Convert types:
- ts_utc → int (seconds)
- e_kwh → float → formatted as "X.XX" string
- p_w → int (rounded)
- Energy/power values → integers or floats
4. Publish to MQTT topic: smartmeter/{device_id}/state
```
### MQTT JSON Format
- Strings: `id`, plus `e_kwh` and `bat_v` (formatted with 2 decimal places)
- Integers: `ts`, `p_w`, `p1_w`, `p2_w`, `p3_w`, `bat_pct`, `rssi`
- Floats: `snr`
### Device Schema Evolution
- ✅ Device now sends: `rx_reject`, `rx_reject_text` (new)
- ⚠️ These don't go to CSV, so can't be republished
- ✅ All existing fields preserved
---
## Conclusion
**Both republish scripts are production-ready and fully compatible with**:
- ✅ Current SD card CSV exports
- ✅ Device MQTT publishers
- ✅ InfluxDB optional auto-detect
- ✅ Home Assistant integrations
- ✅ Legacy data files (backward compatible)
No code changes required. Only documentation correction applied.

# Firmware Requirements (Rust Port Preparation)
## 1. Scope
This document defines the behavior that must be preserved when recreating this firmware in another language (target: Rust).
It is based on the current `lora-refactor` code state and captures:
- functional behavior
- protocol/data contracts
- module and function responsibilities
- runtime state-machine requirements
Function names below are C++ references. Rust naming/layout may differ, but the behavior must remain equivalent.
## 2. Refactored Architecture Baseline
The `lora-refactor` branch split role-specific runtime from the previous large `main.cpp` into dedicated modules while keeping a single firmware image:
- `src/main.cpp` is a thin coordinator that:
- detects role and initializes shared platform subsystems,
- prepares role module configuration,
- calls `begin()` once,
- delegates runtime in `loop()`.
- sender runtime ownership:
- `src/sender_state_machine.h`
- `src/sender_state_machine.cpp`
- receiver runtime ownership:
- `src/receiver_pipeline.h`
- `src/receiver_pipeline.cpp`
- receiver shared mutable state used by setup wiring and runtime:
- `src/app_context.h` (`ReceiverSharedState`)
Sender state machine invariants must remain behavior-equivalent:
- single inflight batch at a time,
- ACK acceptance only for matching `batch_id`,
- retry bounded by `BATCH_MAX_RETRIES`,
- queue depth bounded by `BATCH_QUEUE_DEPTH`.
## 3. System-Level Requirements
- Role selection:
- `Sender` when `GPIO14` reads HIGH.
- `Receiver` when `GPIO14` reads LOW.
- Device identity:
- derive `short_id` from MAC bytes 4/5.
- canonical `device_id` format: `dd3-XXXX` uppercase hex.
- LoRa transport:
- frame format: `[msg_kind][short_id_be][payload][crc16_ccitt]`.
- reject invalid CRC/msg-kind/length.
- Payload codec:
- schema `3` with `present_mask` (30-bit sparse second map).
- support `n==0` sync-request packets.
- Time bootstrap guardrail:
- sender must not run normal sampling/transmit until valid ACK time received.
- accept ACK time only if `time_valid=1` and `epoch >= MIN_ACCEPTED_EPOCH_UTC`.
- sender fault counters reset when first valid sync is accepted.
- after first sync, sender fault counters reset again at each UTC hour boundary.
- Sampling/transmit cadence:
- sender sample cadence 1 Hz.
- sender batch cadence 30 s.
- when sender backlog exists (`batch_count > 1`) and no ACK is pending, sender performs immediate catch-up sends (still stop-and-wait with one inflight batch).
- sync-request cadence 15 s while unsynced.
- sender retransmits reuse cached encoded payload bytes for same inflight batch.
- sender ACK receive window is adaptive from airtime + observed ACK RTT, with expanded second window on miss.
- Receiver behavior:
- decode/reconstruct sparse timestamps.
- ACK accepted batches promptly.
- reject unknown/mismatched sender identities before ACK and before SD/MQTT/web updates.
- update MQTT, web status, SD logging.
- Persistence:
- Wi-Fi/MQTT/NTP/web credentials in Preferences namespace `dd3cfg`.
- Web auth defaults:
- `WEB_AUTH_REQUIRE_STA=true`
- `WEB_AUTH_REQUIRE_AP=true`
- Web and display time rendering:
- local timezone from `TIMEZONE_TZ`.
- Sender diagnostics:
- structured sender diagnostics are emitted to serial debug output only.
- diagnostics do not change LoRa payload schema or remap payload fields.
- SD logging:
- CSV columns include both `ts_utc` and `ts_hms_local`.
- per-day CSV file partitioning uses local date (`TIMEZONE_TZ`) under `/dd3/<device_id>/YYYY-MM-DD.csv`.
- history day-file resolution prefers local-date filenames and falls back to legacy UTC-date filenames.
- history parser supports both current (`ts_utc,ts_hms_local,p_w,...`) and legacy (`ts_utc,p_w,...`) layouts.
## 4. Protocol and Data Contracts
- `LoraMsgKind`:
- `BatchUp=0`
- `AckDown=1`
- `AckDown` payload fixed length `7` bytes:
- `[flags:1][batch_id_be:2][epoch_utc_be:4]`
- `flags bit0 = time_valid`
- sender acceptance window is implementation-adaptive; payload format stays unchanged.
- `BatchInput`:
- fixed arrays length `30` (`energy_wh`, `p1_w`, `p2_w`, `p3_w`)
- `present_mask` must satisfy: only low 30 bits used and `bit_count == n`
- Timestamp constraints:
- receiver rejects decoded data whose timestamps are below `MIN_ACCEPTED_EPOCH_UTC`
- CSV header (current required layout):
- `ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last`
- Home Assistant discovery contract:
- topic: `homeassistant/sensor/<device_id>/<key>/config`
- `unique_id`: `<device_id>_<key>`
- `device.identifiers`: `["<device_id>"]`
- `device.name`: `<device_id>`
- `device.model`: `DD3-LoRa-Bridge`
- `device.manufacturer`: `AcidBurns`
- drift guards:
- canonical value is `HA_MANUFACTURER` in `include/config.h`,
- compile-time lock via `static_assert` in `include/config.h`,
- script guard `test/check_ha_manufacturer.ps1`,
- smoke test guard `test/test_refactor_smoke/test_refactor_smoke.cpp`.
## 5. Module and Function Requirements
## `src/config.cpp`
- `DeviceRole detect_role()`
- configure role pin input pulldown and map to sender/receiver role.
## `lib/dd3_legacy_core/src/data_model.cpp`
- `void init_device_ids(uint16_t&, char*, size_t)`
- read MAC, derive short ID, format canonical device ID.
- `const char *rx_reject_reason_text(RxRejectReason)`
- stable mapping for diagnostics and payloads.
## `lib/dd3_legacy_core/src/html_util.cpp`
- `String html_escape(const String&)`
- escape `& < > " '`.
- `String url_encode_component(const String&)`
- percent-encode non-safe characters.
- `bool sanitize_device_id(const String&, String&)`
- accept `XXXX` or `dd3-XXXX`; reject path traversal, `%`, invalid hex.
- Internal helpers to preserve behavior:
- `is_hex_char`
- `to_upper_hex4`
## `src/meter_driver.cpp`
- `void meter_init()`
- configure `Serial2` at `9600 7E1`, RX pin `PIN_METER_RX`, RX buffer size `8192` on ESP32.
- `bool meter_poll_frame(const char *&, size_t&)`
- incremental frame collector with start `/`, end `!`, timeout, overflow handling.
- `bool meter_parse_frame(const char*, size_t, MeterData&)`
- parse OBIS values and set meter data fields.
- `bool meter_read(MeterData&)`
- compatibility wrapper around poll+parse.
- `void meter_get_stats(MeterDriverStats&)`
- expose parser/UART counters for sender-local diagnostics.
- Internal parse helpers to preserve numeric behavior:
- `detect_obis_field`
- `parse_decimal_fixed`
- `parse_obis_ascii_payload_value`
- `parse_obis_ascii_unit_scale`
- `hex_nibble`
- `parse_obis_hex_payload_u32`
- `meter_debug_log`
## `src/power_manager.cpp`
- `void power_sender_init()`
- sender low-power setup (CPU freq, Wi-Fi/BT off, ADC setup).
- `void power_receiver_init()`
- receiver power setup.
- `void power_configure_unused_pins_sender()`
- configure known-unused pins with pulldown.
- `void read_battery(MeterData&)`
- averaged ADC conversion and voltage calibration.
- `uint8_t battery_percent_from_voltage(float)`
- LUT + interpolation.
- `void light_sleep_ms(uint32_t)`
- timer-based light sleep.
- `void go_to_deep_sleep(uint32_t)`
- timer-based deep sleep.
## `src/time_manager.cpp`
- `void time_receiver_init(const char*, const char*)`
- configure NTP servers and timezone env.
- `uint32_t time_get_utc()`
- return epoch or `0` when not plausible.
- updates "clock plausible" state independently from sync state.
- `bool time_is_synced()`
- true only after explicit sync signals (NTP callback/status or trusted `time_set_utc`).
- `void time_set_utc(uint32_t)`
- set system time and sync flags.
- `void time_get_local_hhmm(char*, size_t)`
- timezone-based local `HH:MM` output.
- `uint32_t time_get_last_sync_utc()`
- `uint32_t time_get_last_sync_age_sec()`
- Internal behavior-critical helpers:
- `note_last_sync`
- `mark_synced`
- `ntp_sync_notification_cb`
- `ensure_timezone_set`
## `src/lora_transport.cpp`
- `void lora_init()`
- initialize SX1276 with configured LoRa params.
- `bool lora_send(const LoraPacket&)`
- frame pack + CRC append + transmit.
- `bool lora_receive(LoraPacket&, uint32_t timeout_ms)`
- parse frame, validate, return metadata including RSSI/SNR.
- `RxRejectReason lora_get_last_rx_reject_reason()`
- consume-and-clear reject reason.
- `bool lora_get_last_rx_signal(int16_t&, float&)`
- access last RX signal snapshot.
- `void lora_idle()`
- `void lora_sleep()`
- `void lora_receive_continuous()`
- `bool lora_receive_window(LoraPacket&, uint32_t)`
- `uint32_t lora_airtime_ms(size_t)`
- compute packet airtime from SF/BW/CR/preamble.
- Internal behavior-critical helpers:
- `note_reject`
- `lora_build_frame`, `lora_parse_frame`, `lora_crc16_ccitt` (implemented in `lib/dd3_transport_logic/src/lora_frame_logic.cpp`)
## `lib/dd3_legacy_core/src/payload_codec.cpp`
- `bool encode_batch(const BatchInput&, uint8_t*, size_t, size_t*)`
- schema v3 encoder with metadata, sparse present mask, delta coding.
- `bool decode_batch(const uint8_t*, size_t, BatchInput*)`
- strict schema/magic/flags decode + bounds checks.
- Varint primitives:
- `uleb128_encode`, `uleb128_decode`
- `zigzag32`, `unzigzag32`
- `svarint_encode`, `svarint_decode`
- Internal helpers:
- `write_u16_le`, `write_u32_le`
- `read_u16_le`, `read_u32_le`
- `ensure_capacity`
- `bit_count32`
- Optional self-test:
- `payload_codec_self_test` (when `PAYLOAD_CODEC_TEST`).
## `lib/dd3_legacy_core/src/json_codec.cpp`
- `bool meterDataToJson(const MeterData&, String&)`
- create MQTT state JSON with stable field semantics.
- Internal numeric formatting helpers:
- `round2`
- `round_to_i32`
- `short_id_from_device_id`
- `format_float_2`
- `set_int_or_null`
## `src/mqtt_client.cpp`
- `void mqtt_init(const WifiMqttConfig&, const char*)`
- `void mqtt_loop()`
- `bool mqtt_is_connected()`
- `bool mqtt_publish_state(const MeterData&)`
- `bool mqtt_publish_faults(const char*, const FaultCounters&, FaultType, uint32_t)`
- `bool mqtt_publish_discovery(const char*)`
- `bool mqtt_publish_test(const char*, const String&)` (test mode only)
- Internal behavior-critical helpers:
- `fault_text`
- `mqtt_connect`
- `publish_discovery_sensor`
- discovery payload uses canonical device identity fields and `manufacturer=AcidBurns`
## `src/wifi_manager.cpp`
- `void wifi_manager_init()`
- `bool wifi_load_config(WifiMqttConfig&)`
- `bool wifi_save_config(const WifiMqttConfig&)`
- returns `false` when any Preferences write/verify fails.
- `bool wifi_connect_sta(const WifiMqttConfig&, uint32_t timeout_ms)`
- `void wifi_start_ap(const char*, const char*)`
- `bool wifi_is_connected()`
- `String wifi_get_ssid()`
## `src/sd_logger.cpp`
- `void sd_logger_init()`
- `bool sd_logger_is_ready()`
- `void sd_logger_log_sample(const MeterData&, bool include_error_text)`
- append/create per-day CSV under `/dd3/<device_id>/YYYY-MM-DD.csv` using local calendar date from `TIMEZONE_TZ`.
- Internal behavior-critical helpers:
- `fault_text`
- `ensure_dir`
- `format_date_local`
- `format_hms_local`
## `src/display_ui.cpp`
Public display API that must remain behavior-equivalent:
- `display_power_down`
- `display_init`
- `display_set_role`
- `display_set_self_ids`
- `display_set_sender_statuses`
- `display_set_last_meter`
- `display_set_last_read`
- `display_set_last_tx`
- `display_set_sender_queue`
- `display_set_sender_batches`
- `display_set_last_error`
- `display_set_receiver_status`
- `display_set_test_code` (test mode)
- `display_set_test_code_for_sender` (test mode)
- `display_tick`
Internal rendering helpers to preserve behavior:
- `oled_set_power`
- `age_seconds`
- `round_power_w`
- `render_last_error_line`
- `render_last_sync_line`
- `render_sender_status`
- `render_sender_measurement`
- `render_receiver_status`
- `render_receiver_sender`
## `src/web_server.cpp`
Public web API:
- `web_server_set_config`
- `web_server_set_sender_faults`
- `web_server_set_last_batch`
- `web_server_begin_ap`
- `web_server_begin_sta`
- `web_server_loop`
Internal route/state functions to preserve behavior:
- `format_local_hms`
- `format_epoch_local_hms`
- `timestamp_age_seconds`
- `round_power_w`
- `auth_required`
- `fault_text`
- `ensure_auth`
- `html_header`
- `html_footer`
- `format_faults`
- `sanitize_sd_download_path`
- `checkbox_checked`
- `sanitize_history_device_id`
- `sanitize_download_filename`
- `history_reset`
- `history_date_from_epoch_local`
- `history_date_from_epoch_utc` (legacy fallback mapping)
- `history_open_next_file`
- `history_parse_line`
- `history_tick`
- `render_sender_block`
- `append_sd_listing`
- `handle_root`
- `handle_wifi_get`
- `handle_wifi_post`
- `handle_sender`
- `handle_manual`
- `handle_history_start`
- `handle_history_data`
- `handle_sd_download`
## `src/test_mode.cpp` (`ENABLE_TEST_MODE`)
- `test_sender_loop`
- periodic JSON test frame transmit.
- `test_receiver_loop`
- decode test JSON, update display test markers, publish MQTT test topic.
## `src/app_context.h`
- `ReceiverSharedState`
- retains receiver-owned shared status/fault/discovery state used by setup wiring and runtime.
## `src/sender_state_machine.h/.cpp` (Sender Runtime)
Public API:
- `SenderStateMachineConfig`
- `SenderStats`
- `SenderStateMachine::begin(...)`
- `SenderStateMachine::loop()`
- `SenderStateMachine::stats()`
Behavior-critical internals (migrated from pre-refactor `main.cpp`) that must remain equivalent:
- Logging/utilities:
- `serial_debug_printf`
- `bit_count32`
- `abs_diff_u32`
- Meter-time anchoring and ingest:
- `meter_time_update_snapshot`
- `set_last_meter_sample`
- `parse_meter_frame_sample`
- `meter_queue_push_latest`
- `meter_reader_task_entry`
- `meter_reader_start`
- `meter_reader_pump`
- Sender state/data handling:
- `update_battery_cache`
- `battery_sample_due`
- `batch_queue_drop_oldest`
- `sender_note_rx_reject`
- `sender_log_diagnostics`
- `batch_queue_peek`
- `batch_queue_enqueue`
- `reset_build_counters`
- `append_meter_sample`
- `last_sample_ts`
- Sender fault handling:
- `note_fault`
- `clear_faults`
- `sender_reset_fault_stats`
- `sender_reset_fault_stats_on_first_sync`
- `sender_reset_fault_stats_on_hour_boundary`
- Sender-specific encoding/scheduling:
- `kwh_to_wh_from_float`
- `float_to_i16_w`
- `float_to_i16_w_clamped`
- `battery_mv_from_voltage`
- `compute_batch_ack_timeout_ms`
- `send_batch_payload`
- `invalidate_inflight_encode_cache`
- `prepare_inflight_from_queue`
- `send_inflight_batch`
- `send_meter_batch`
- `send_sync_request`
- `resend_inflight_batch`
- `finish_inflight_batch`
- `sender_loop`
## `src/receiver_pipeline.h/.cpp` (Receiver Runtime)
Public API:
- `ReceiverPipelineConfig`
- `ReceiverStats`
- `ReceiverPipeline::begin(...)`
- `ReceiverPipeline::loop()`
- `ReceiverPipeline::stats()`
Behavior-critical internals (migrated from pre-refactor `main.cpp`) that must remain equivalent:
- Receiver setup/state:
- `init_sender_statuses`
- Fault handling/publish:
- `note_fault`
- `clear_faults`
- `age_seconds`
- `counters_changed`
- `publish_faults_if_needed`
- Binary helpers and ID conversion:
- `write_u16_le`
- `read_u16_le`
- `write_u16_be`
- `read_u16_be`
- `write_u32_be`
- `read_u32_be`
- `sender_id_from_short_id`
- `short_id_from_sender_id`
- LoRa RX/TX pipeline:
- `compute_batch_rx_timeout_ms`
- `send_batch_ack`
- `reset_batch_rx`
- `process_batch_packet`
- `receiver_loop`
## `src/main.cpp` (Thin Coordinator)
Current core orchestration requirements:
- `setup`
- initialize shared subsystems once,
- force-link `dd3_legacy_core` before first legacy-core symbol use (`dd3_legacy_core_force_link()`),
- instantiate role config and call role `begin`,
- keep role-specific runtime out of this file.
- `loop`
- delegate to `SenderStateMachine::loop()` or `ReceiverPipeline::loop()` by role.
- Watchdog wrapper remains in coordinator:
- `watchdog_init`
- `watchdog_kick`
## 6. Rust Porting Constraints and Recommendations
- Preserve wire compatibility first:
- LoRa frame byte layout, CRC16, ACK format, payload schema v3.
- sender optimization changes must not alter payload field meanings.
- Preserve persistent storage keys:
- Preferences keys (`ssid`, `pass`, `mqhost`, `mqport`, `mquser`, `mqpass`, `ntp1`, `ntp2`, `webuser`, `webpass`, `valid`).
- Preserve timing constants and acceptance thresholds:
- bootstrap guardrail, retry counts, schedule intervals, min accepted epoch.
- Preserve CSV output layout exactly:
- consumers (history parser and external tooling) depend on it.
- preserve reader compatibility for both current and legacy layouts.
- Preserve enum meanings:
- `FaultType`, `RxRejectReason`, `LoraMsgKind`.
Suggested Rust module split:
- `config`, `ids`, `meter`, `power`, `time`, `lora_transport`, `payload_codec`, `sender_state_machine`, `receiver_pipeline`, `app_context`, `mqtt`, `wifi_cfg`, `sd_log`, `web`, `display`, `runtime`.
Suggested Rust primitives:
- async task for meter reader + bounded channel (drop-oldest behavior).
- explicit state structs for sender/receiver loops.
- serde-free/manual codec for wire compatibility where needed.
## 7. Port Validation Checklist
- Sender unsynced boot sends only sync requests.
- ACK time bootstrap unlocks normal sender sampling.
- Sparse present-mask encode/decode round-trip matches C++.
- Receiver reconstructs timestamps correctly for gaps.
- Duplicate batch handling updates counters and suppresses duplicate publish/log.
- Web UI shows `epoch (HH:MM:SS TZ)` local time.
- SD CSV header/fields match expected order.
- SD daily files roll over at local midnight (`TIMEZONE_TZ`), not UTC midnight.
- History endpoint reads current and legacy CSV layouts successfully.
- History endpoint can read both local-date and legacy UTC-date day filenames.
- MQTT state/fault payload fields match existing names and semantics.
## 8. Port Readiness Audit (2026-02-20)
Evidence checked on `lora-refactor`:
- build verification:
- `pio run -e lilygo-t3-v1-6-1`
- `pio run -e lilygo-t3-v1-6-1-test`
- drift guard verification:
- `powershell -ExecutionPolicy Bypass -File test/check_ha_manufacturer.ps1`
- refactor ownership verification:
- sender state machine state/API present in `src/sender_state_machine.h/.cpp`,
- receiver pipeline API present in `src/receiver_pipeline.h/.cpp`,
- coordinator remains thin in `src/main.cpp`.
Findings:
- Requirements are functionally met by current C++ baseline from static/code-build checks.
- The old requirement ownership under `src/main.cpp` was stale; this document now maps that behavior to `sender_state_machine` and `receiver_pipeline`.
- No wire/protocol or persistence contract drift found in this audit.


@@ -1,109 +0,0 @@
# ✅ Python Scripts Compatibility Check - Quick Result
**Status:** BOTH SCRIPTS ARE FULLY COMPATIBLE ✅
**Date:** March 11, 2026
**Scripts Tested:** `republish_mqtt.py` and `republish_mqtt_gui.py`
---
## Checklist
- ✅ CSV parsing works with current SD card format (`ts_utc,ts_hms_local,...`)
- ✅ Backward compatible with legacy CSV format (no `ts_hms_local`)
- ✅ MQTT JSON output matches device expectations
- ✅ All required fields present in current schema
- ✅ Scripts handle future CSV columns gracefully
- ✅ InfluxDB auto-detect schema is correct (optional feature)
- ✅ Both scripts compile without syntax errors
- ⚠️ **Documentation error found and FIXED** (typo in CSV header)
- ⚠️ Error fields from CSV not republished (expected limitation)
---
## What's Different?
### Device CSV Format (Current)
```
ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
```
- `ts_hms_local` = local time (your timezone)
- `ts_utc` = UTC timestamp in seconds
- Scripts work with both!
### MQTT Format (What scripts republish)
```json
{
"id": "F19C",
"ts": 1710076800,
"e_kwh": "1234.57",
"p_w": 5432,
"p1_w": 1800,
"p2_w": 1816,
"p3_w": 1816,
"bat_v": "4.15",
"bat_pct": 95,
"rssi": -95,
"snr": 9.25
}
```
- Fully compatible with device format ✅
- Can be parsed by Home Assistant, InfluxDB, etc. ✅
---
## Issues Found & Fixed
| Issue | Severity | Status | Fix |
|-------|----------|--------|-----|
| CSV header typo in docs<br/>(was: `ts_hms_utc`, should be: `ts_hms_local`) | HIGH<br/>(docs only) | ✅ FIXED | Updated [REPUBLISH_README.md](REPUBLISH_README.md#L84) |
| Error fields not republished<br/>(err_m, err_d, err_tx, err_last) | LOW<br/>(expected limitation) | ✅ DOCUMENTED | Added notes to compatibility report |
| InfluxDB bridge required | INFO<br/>(optional feature) | ✅ OK | Gracefully falls back to manual mode |
---
## What to Do
### For Users
- ✅ **No action needed** - scripts work as-is
- ✅ Use these scripts normally with confidence
- 📖 Check updated [REPUBLISH_README.md](REPUBLISH_README.md) for correct CSV format
- 💾 CSV files from device are compatible
### For Developers
- 📄 See [REPUBLISH_COMPATIBILITY_REPORT.md](REPUBLISH_COMPATIBILITY_REPORT.md) for detailed analysis
- 🧪 Run `python test_republish_compatibility.py` to validate changes
- 📋 Consider adding error field republishing in future versions (optional)
---
## Test Evidence
### Automated Tests (5/5 PASS)
```
✓ CSV Format (Current with ts_hms_local)
✓ CSV Format (with future fields)
✓ MQTT JSON Format compatibility
✓ CSV Format (Legacy - backward compat)
✓ InfluxDB schema validation
```
### What the Tests Cover
- ✅ Parses CSV headers correctly
- ✅ Converts data types properly (strings, ints, floats)
- ✅ Handles missing optional fields
- ✅ Generates correct MQTT JSON
- ✅ Works with InfluxDB schema expectations
---
## Summary
Both Python scripts (`republish_mqtt.py` and `republish_mqtt_gui.py`) continue to work correctly with:
- Current SD card CSV exports from the device
- MQTT broker connectivity
- Optional InfluxDB auto-detect mode
- All data types and field formats
The only problem found was a documentation typo which has been corrected.
**✅ Scripts are ready for production use.**


@@ -1,293 +0,0 @@
# Energy Optimization: DD3 LoRa Bridge Sender
## Executive Summary
### Goals
- Keep **1 Hz measurement resolution** (`METER_SAMPLE_INTERVAL_MS = 1000`)
- Keep **30 s batch sends** (`METER_SEND_INTERVAL_MS = 30000`)
- **≥ 20 % reduction** in average current draw
- **Zero data loss**, identical batch semantics
### Key Measures and Prioritization
| # | Measure | Estimated saving | Risk | Priority |
|---|---------|------------------|------|-----------|
| 1 | Chunked light sleep between 1 Hz samples | 25-35 % avg. current | low | **P0** |
| 2 | Meter-reader exponential backoff | 2-5 % (fewer core-0 wakeups) | very low | P1 |
| 3 | Log throttling (configurable) | 1-3 % (less UART TX) | none | P1 |
| 4 | Configurable CPU frequency (80→40 MHz) | 5-10 % (optional) | verify SPI timing | P2 |
| 5 | OLED auto-off (already implemented) | ~5 mA when off | none | ✅ already active |
| 6 | WiFi/BT disabled (sender) | ~80 mA saved | none | ✅ already active |
| 7 | LoRa sleep between batches | ~10 mA saved | none | ✅ already active |
### Summary
The **biggest lever** (P0) is switching from `delay(idle_ms)` to
`light_sleep_chunked_ms()` in the sender main loop. In the normal state (time
synchronized, 1 Hz sampling) the CPU spends about 950 ms/s idle. Previously
`delay()` kept the CPU active at 80 MHz (≈ 25-30 mA); now light sleep is used
in 100 ms chunks (≈ 0.8-1.5 mA). This alone lowers the average current by
~25 mA, against a total draw of ~35-40 mA.
---
## Technical Appendix
### 1. Chunked Light Sleep (P0)
**Problem:** In the sender loop, `delay(idle_ms)` was called after the sampling
tick so the meter-reader task could keep running on core 0. The CPU stayed
fully active the whole time.
**Solution:** `light_sleep_chunked_ms(total_ms, chunk_ms)` splits the idle time
into chunks of at most 100 ms so the UART hardware FIFO (128 bytes @ 9600 baud
≈ 133 ms safety margin) does not overflow.
**Mechanism:**
1. The main task (core 1) calls `esp_light_sleep_start()` → both cores sleep
2. Timer wakeup after at most 100 ms
3. The FreeRTOS scheduler runs → the meter-reader task (core 0, prio 2) drains the FIFO
4. The main task resumes → next chunk or sampling tick
**Affected files:**
```
include/config.h             # New constants: LIGHT_SLEEP_IDLE, LIGHT_SLEEP_CHUNK_MS
include/power_manager.h      # New function: light_sleep_chunked_ms()
src/power_manager.cpp        # Implementation of light_sleep_chunked_ms()
src/sender_state_machine.cpp # Idle path: delay() -> light_sleep_chunked_ms()
```
**Patch power_manager.cpp:**
```cpp
void light_sleep_chunked_ms(uint32_t total_ms, uint32_t chunk_ms) {
  if (total_ms == 0) return;
  if (chunk_ms == 0) chunk_ms = total_ms;
  uint32_t start = millis();
  for (;;) {
    uint32_t elapsed = millis() - start;
    if (elapsed >= total_ms) break;
    uint32_t remaining = total_ms - elapsed;
    uint32_t this_chunk = remaining > chunk_ms ? chunk_ms : remaining;
    if (this_chunk < 10) {
      delay(this_chunk); // light-sleep overhead not worth it below 10 ms
      break;
    }
    light_sleep_ms(this_chunk);
    // After wakeup the FreeRTOS scheduler runs automatically:
    // meter_reader_task (prio 2 > main prio 1) drains the UART FIFO
  }
}
```
**Patch sender_state_machine.cpp (idle path):**
```cpp
lora_sleep();
if (LIGHT_SLEEP_IDLE) {
  // Chunked light-sleep: wake every LIGHT_SLEEP_CHUNK_MS so the
  // meter_reader_task (Core 0, prio 2) can drain the 128-byte UART HW FIFO
  // before it overflows (~133 ms at 9600 baud). Saves ~25 mA vs delay().
  light_sleep_chunked_ms(idle_ms, LIGHT_SLEEP_CHUNK_MS);
} else if (g_time_acquired) {
  delay(idle_ms); // fallback
} else {
  light_sleep_ms(idle_ms);
}
```
**Fallback flag:** `ENABLE_LIGHT_SLEEP_IDLE=0` disables light sleep entirely →
behavior identical to before.
---
### 2. Meter-Reader Exponential Backoff (P1)
**Problem:** The meter-reader task polls every 5 ms via `vTaskDelay(5)` even
when no meter is connected or it keeps returning errors. With no meter
connected that means ~200 wakeups/s on core 0 for no benefit.
**Solution:** Exponential backoff from `METER_FAIL_BACKOFF_BASE_MS` (10 ms) up
to `METER_FAIL_BACKOFF_MAX_MS` (500 ms) on consecutive failures. On a
successful frame the delay immediately resets to 5 ms (normal polling).
```cpp
// In meter_reader_task_entry():
// Clamp the shift count so the uint32_t cannot overflow during long failure
// streaks (10 ms << 6 = 640 ms already exceeds the 500 ms cap).
uint32_t shift = consecutive_fails > 6 ? 6 : consecutive_fails;
uint32_t backoff_ms = METER_FAIL_BACKOFF_BASE_MS << shift;
if (backoff_ms > METER_FAIL_BACKOFF_MAX_MS) backoff_ms = METER_FAIL_BACKOFF_MAX_MS;
vTaskDelay(pdMS_TO_TICKS(backoff_ms));
```
**Risk:** None: normal 1 Hz operation with a connected meter delivers frames
continuously → `consecutive_fails = 0` → backoff stays at 10 ms.
---
### 3. Log Throttling (P1)
**Problem:** Diagnostic logs were emitted every 5 s, power logs every 10 s.
Each `Serial.printf()` costs ~1 ms of CPU plus UART TX energy.
**Solution:** Configurable `SENDER_DIAG_LOG_INTERVAL_MS`: 5 s in debug mode,
30 s in non-debug mode. The production build (`SERIAL_DEBUG_MODE_FLAG=0`)
eliminates all logs entirely (existing behavior, now explicit).
---
### 4. CPU Frequency (P2, optional)
`SENDER_CPU_MHZ` is now configurable (default: 80 MHz). 40 MHz would be
possible and saves ~5 mA, but requires validating the SPI timing of the
LoRa module (SX1276). **Recommendation:** validate at 80 MHz first, then test
40 MHz.
**Note:** No separate build flag was added; if needed, set
`-DSENDER_CPU_MHZ=40` in `build_flags`.
---
### 5. Frame Timeout (configurable)
`METER_FRAME_TIMEOUT_CFG_MS` (default: 3000 ms) now lives in `config.h`
instead of being hard-coded in `meter_driver.cpp`, allowing tuning without
source changes.
---
## Build Variants
| Environment | Description |
|-------------|-------------|
| `lilygo-t3-v1-6-1` | Standard build, debug on, light sleep **on** (default) |
| `lilygo-t3-v1-6-1-prod` | Production, debug off, light sleep **on** |
| `lilygo-t3-v1-6-1-lowpower` | Low power, debug off, light sleep on |
| `lilygo-t3-v1-6-1-868-lowpower` | Low power @ 868 MHz |
| `lilygo-t3-v1-6-1-lowpower-debug` | Low power + debug + meter diagnostics |
**Disable light sleep** (fallback): `-DENABLE_LIGHT_SLEEP_IDLE=0`
---
## Measurement Protocol / Test Plan
### Equipment
- USB power meter (e.g. FNIRSI FNB58) or an INA219 breakout on the battery connector
- Sender board (TTGO LoRa32 v1.6.1) with the smart meter attached
- Receiver board for ACKs
### Measurement Procedure (30 min run)
1. **Baseline (without light sleep):**
   ```
   pio run -e lilygo-t3-v1-6-1 -t upload -- -DENABLE_LIGHT_SLEEP_IDLE=0
   ```
   - Run for 30 min, measure the average current
   - Log serial output: `pio device monitor -b 115200 > baseline.log`
2. **Light sleep (enabled):**
   ```
   pio run -e lilygo-t3-v1-6-1-lowpower-debug -t upload
   ```
   - Run for 30 min, measure the average current
   - Log serial output: `pio device monitor -b 115200 > lowpower.log`
3. **Evaluation:**
   - Mean current: `avg(I_baseline)` vs `avg(I_lowpower)`
   - 1 Hz jitter: `grep "diag:" lowpower.log` → check the sample timestamps
   - Sample loss: evaluate the batch logs (`valid_count`, `invalid_count`)
   - Batch semantics: compare ACK success rates
### Acceptance Criteria
| Criterion | Threshold |
|-----------|-----------|
| Average current | ≥ 20 % reduction vs baseline |
| Lost samples | 0 in 30 min |
| 1 Hz jitter | < 50 ms |
| Batch semantics | identical ACK success rate (±2 %) |
| Error rate | ≤ 2/h over 4 h |
| OLED function | button wakes the display, auto-off works |
| Watchdog | no reset in 4 h |
### Go/No-Go
- **Go:** all criteria met → merge into `main`
- **No-go on jitter > 100 ms:** reduce `LIGHT_SLEEP_CHUNK_MS` to 50 ms and
  measure again
- **No-go on sample loss:** fall back to `ENABLE_LIGHT_SLEEP_IDLE=0`, check the
  UART FIFO buffer size
---
## Current Budget Estimate (Sender, 1 Hz Sampling + 30 s Batch)
### Baseline (delay-based)
| Phase | Duration/30 s | Current (mA) | Share |
|-------|---------------|--------------|-------|
| Sampling (30× ~20 ms) | 600 ms | 30 | 2 % |
| Encoding + TX (~1.5 s) | 1500 ms | 120 | 5 % |
| ACK RX window (~3 s) | 3000 ms | 25 | 10 % |
| Idle/delay (~25 s) | 24900 ms | 28 | 83 % |
| **Average** | | **~32 mA** | |
### Optimized (light sleep)
| Phase | Duration/30 s | Current (mA) | Share |
|-------|---------------|--------------|-------|
| Sampling (30× ~20 ms) | 600 ms | 30 | 2 % |
| Encoding + TX (~1.5 s) | 1500 ms | 120 | 5 % |
| ACK RX window (~3 s) | 3000 ms | 25 | 10 % |
| Light sleep (~25 s) | 24900 ms | 1.2 | 83 % |
| **Average** | | **~10 mA** | |
**Estimated saving: ~70 % (32→10 mA)**
> Real numbers depend on the board (regulator quiescent current, LED), OLED
> state, and the LoRa spreading factor. Conservatively, ≥ 20 % is achievable.
---
## PR Plan
### Branch
```
feat/power-light-sleep-idle
```
### Commits
```
feat(power): 1Hz RTC wake + chunked light-sleep; meter backoff; log throttling
- Replace delay() with light_sleep_chunked_ms() in sender idle path
- Add ENABLE_LIGHT_SLEEP_IDLE config flag (default: on)
- Meter reader task: exponential backoff on consecutive poll failures
- Configurable SENDER_DIAG_LOG_INTERVAL_MS, METER_FRAME_TIMEOUT_CFG_MS
- Configurable SENDER_CPU_MHZ (default: 80)
- New PlatformIO environments: lowpower, 868-lowpower, lowpower-debug
```
---
## Open Risks / Side Effects
1. **UART FIFO overflow above 9600 baud:** If a higher baud rate is used in the
   future, `LIGHT_SLEEP_CHUNK_MS` must be reduced proportionally
   (formula: `128 / (baud / 10) * 1000`).
2. **ESP32 light sleep + LoRa interrupt:** If the LoRa transceiver (SX1276)
   raises DIO0 interrupts during light sleep, they are handled after wakeup.
   Not a problem in sender mode (TX-only between batches), because
   `lora_sleep()` is called before entering light sleep.
3. **Watchdog:** `WATCHDOG_TIMEOUT_SEC = 120 s` is more than sufficient for the
   maximum light-sleep chunk of 100 ms. No risk.
4. **FreeRTOS tick drift:** The tick counter is resynchronized after light
   sleep; `millis()` stays consistent. No impact on the 1 Hz timing.
5. **Meter backoff in normal operation:** The backoff only kicks in when
   `meter_poll_frame() == false` (no frame available). With 1 Hz frames in
   normal operation the backoff immediately returns to
   `METER_FAIL_BACKOFF_BASE_MS`. No impact on sampling latency.


@@ -1,48 +0,0 @@
# Legacy Unity Tests
This change intentionally keeps the existing PlatformIO legacy Unity harness unchanged.
No `platformio.ini`, CI, or test-runner configuration was modified.
## Compile-Only (Legacy Gate)
Use compile-only checks in environments that do not have a connected board:
```powershell
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing
pio test -e lilygo-t3-v1-6-1-868-test --without-uploading --without-testing
```
Suite-specific compile checks:
```powershell
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing -f test_html_escape
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing -f test_payload_codec
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing -f test_lora_transport
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing -f test_json_codec
pio test -e lilygo-t3-v1-6-1-test --without-uploading --without-testing -f test_refactor_smoke
```
## Full On-Device Unity Run
When hardware is connected, run full legacy Unity tests:
```powershell
pio test -e lilygo-t3-v1-6-1-test
pio test -e lilygo-t3-v1-6-1-868-test
```
## Suite Coverage
- `test_html_escape`: `html_escape`, `url_encode_component`, and `sanitize_device_id` edge/adversarial coverage.
- `test_payload_codec`: payload schema v3 roundtrip/reject paths and golden vectors.
- `test_lora_transport`: CRC16, frame encode/decode integrity, and chunk reassembly behavior.
- `test_json_codec`: state JSON key stability and Home Assistant discovery payload manufacturer/key stability.
- `test_refactor_smoke`: baseline include/type smoke and manufacturer constant guard, using stable public headers from `include/` (no `../../src` includes).
## Manufacturer Drift Guard
Run the static guard script to enforce Home Assistant manufacturer wiring:
```powershell
powershell -ExecutionPolicy Bypass -File test/check_ha_manufacturer.ps1
```


@@ -1,3 +0,0 @@
#pragma once
#include "../src/app_context.h"


@@ -12,25 +12,6 @@ enum class BatchRetryPolicy : uint8_t {
Drop = 1 Drop = 1
}; };
// =============================================================================
// ██ DEPLOYMENT SETTINGS — adjust these for your hardware / frequency band
// =============================================================================
// LoRa frequency — uncomment ONE line:
#define LORA_FREQUENCY_HZ 433E6 // 433 MHz (EU ISM, default)
// #define LORA_FREQUENCY_HZ 868E6 // 868 MHz (EU SRD)
// #define LORA_FREQUENCY_HZ 915E6 // 915 MHz (US ISM)
// Expected sender device IDs (short-IDs). The receiver will only accept
// batches from these senders. Add one entry per physical sender board.
constexpr uint8_t NUM_SENDERS = 1;
inline constexpr uint16_t EXPECTED_SENDER_IDS[NUM_SENDERS] = {
0xF19C // TTGO #1 433 MHz sender
// 0x7EB4 // TTGO #2 868 MHz sender (uncomment & adjust NUM_SENDERS)
};
// =============================================================================
// Pin definitions
constexpr uint8_t PIN_LORA_SCK = 5;
constexpr uint8_t PIN_LORA_MISO = 19;
@@ -51,9 +32,14 @@ constexpr uint8_t PIN_BAT_ADC = 35;
constexpr uint8_t PIN_ROLE = 14;
constexpr uint8_t PIN_OLED_CTRL = 13;
- constexpr uint8_t PIN_METER_RX = 34;
+ constexpr uint8_t PIN_METER1_RX = 34; // UART2 RX
constexpr uint8_t PIN_METER2_RX = 25; // UART1 RX
constexpr uint8_t PIN_METER3_RX = 3; // UART0 RX (prod only, when serial debug is off)
- // LoRa radio parameters
+ // LoRa settings
#ifndef LORA_FREQUENCY_HZ
#define LORA_FREQUENCY_HZ 433E6
#endif
constexpr long LORA_FREQUENCY = LORA_FREQUENCY_HZ;
constexpr uint8_t LORA_SPREADING_FACTOR = 12;
constexpr long LORA_BANDWIDTH = 125E3;
@@ -79,39 +65,16 @@ constexpr uint8_t METER_BATCH_MAX_SAMPLES = 30;
constexpr uint8_t BATCH_QUEUE_DEPTH = 10;
constexpr BatchRetryPolicy BATCH_RETRY_POLICY = BatchRetryPolicy::Keep;
constexpr uint32_t WATCHDOG_TIMEOUT_SEC = 120;
constexpr uint32_t WIFI_RECONNECT_INTERVAL_MS = 60000; // WiFi reconnection retry interval (1 minute)
constexpr bool ENABLE_HA_DISCOVERY = true;
#ifndef SERIAL_DEBUG_MODE_FLAG
#define SERIAL_DEBUG_MODE_FLAG 0
#endif
constexpr bool SERIAL_DEBUG_MODE = SERIAL_DEBUG_MODE_FLAG != 0;
constexpr uint8_t METER_COUNT_DEBUG = 2;
constexpr uint8_t METER_COUNT_PROD = 3;
constexpr uint8_t METER_COUNT = SERIAL_DEBUG_MODE ? METER_COUNT_DEBUG : METER_COUNT_PROD;
constexpr bool SERIAL_DEBUG_DUMP_JSON = false;
constexpr bool LORA_SEND_BYPASS = false;
// --- Power management (sender) ---
// Light-sleep between 1 Hz samples: saves ~25 mA vs active delay().
// UART HW FIFO is 128 bytes; at 9600 baud (~960 B/s) max safe chunk ≈133 ms.
#ifndef ENABLE_LIGHT_SLEEP_IDLE
#define ENABLE_LIGHT_SLEEP_IDLE 1
#endif
constexpr bool LIGHT_SLEEP_IDLE = ENABLE_LIGHT_SLEEP_IDLE != 0;
constexpr uint32_t LIGHT_SLEEP_CHUNK_MS = 100;
// CPU frequency for sender (MHz). 80 = default, 40 = aggressive savings.
#ifndef SENDER_CPU_MHZ
#define SENDER_CPU_MHZ 80
#endif
// Log-throttle interval for sender diagnostics (ms). Higher = less serial TX.
constexpr uint32_t SENDER_DIAG_LOG_INTERVAL_MS = SERIAL_DEBUG_MODE ? 5000 : 30000;
// Meter driver: max time (ms) to wait for a complete frame before discarding.
// Lower values recover faster from broken frames and save wasted polling.
constexpr uint32_t METER_FRAME_TIMEOUT_CFG_MS = 3000;
// Meter driver: backoff ceiling on consecutive frame failures (ms).
constexpr uint32_t METER_FAIL_BACKOFF_MAX_MS = 500;
constexpr uint32_t METER_FAIL_BACKOFF_BASE_MS = 10;
constexpr bool ENABLE_SD_LOGGING = true;
constexpr uint8_t PIN_SD_CS = 13;
constexpr uint8_t PIN_SD_MOSI = 15;
@@ -121,29 +84,18 @@ constexpr uint16_t SD_HISTORY_MAX_DAYS = 30;
constexpr uint16_t SD_HISTORY_MIN_RES_MIN = 1;
constexpr uint16_t SD_HISTORY_MAX_BINS = 4000;
constexpr uint16_t SD_HISTORY_TIME_BUDGET_MS = 10;
constexpr const char *TIMEZONE_TZ = "CET-1CEST,M3.5.0/2,M10.5.0/3";
constexpr const char *AP_SSID_PREFIX = "DD3-Bridge-";
constexpr const char *AP_PASSWORD = "changeme123";
constexpr bool WEB_AUTH_REQUIRE_STA = true;
- constexpr bool WEB_AUTH_REQUIRE_AP = true;
+ constexpr bool WEB_AUTH_REQUIRE_AP = false;
// SECURITY: these defaults are only used until the user sets credentials via
// the web config page (/wifi). The first-boot AP forces password change.
constexpr const char *WEB_AUTH_DEFAULT_USER = "admin";
constexpr const char *WEB_AUTH_DEFAULT_PASS = "admin";
inline constexpr char HA_MANUFACTURER[] = "AcidBurns";
static_assert(
HA_MANUFACTURER[0] == 'A' &&
HA_MANUFACTURER[1] == 'c' &&
HA_MANUFACTURER[2] == 'i' &&
HA_MANUFACTURER[3] == 'd' &&
HA_MANUFACTURER[4] == 'B' &&
HA_MANUFACTURER[5] == 'u' &&
HA_MANUFACTURER[6] == 'r' &&
HA_MANUFACTURER[7] == 'n' &&
HA_MANUFACTURER[8] == 's' &&
HA_MANUFACTURER[9] == '\0',
"HA_MANUFACTURER must remain exactly \"AcidBurns\"");
constexpr uint8_t NUM_SENDERS = 1;
constexpr uint32_t MIN_ACCEPTED_EPOCH_UTC = 1769904000UL; // 2026-02-01 00:00:00 UTC
inline constexpr uint16_t EXPECTED_SENDER_IDS[NUM_SENDERS] = {
0xF19C //433mhz sender
//0x7EB4 //868mhz sender
};
DeviceRole detect_role();
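The removed power-management comment in this config reasons from the UART hardware FIFO size to a safe light-sleep chunk length. The arithmetic is easy to check directly (assuming 8N1 framing, i.e. 10 wire bits per data byte):

```python
# At 9600 baud with 8N1 framing, each data byte costs 10 bits on the wire.
byte_rate = 9600 / 10            # ~960 bytes per second
fifo_bytes = 128                 # ESP32 UART hardware RX FIFO depth
max_chunk_ms = fifo_bytes / byte_rate * 1000

print(round(max_chunk_ms))       # prints 133
```

Sleeping longer than ~133 ms per chunk risks overflowing the RX FIFO before the driver drains it, which is why `LIGHT_SLEEP_CHUNK_MS = 100` sits safely below that bound.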

View File

@@ -15,8 +15,7 @@ enum class RxRejectReason : uint8_t {
InvalidMsgKind = 2,
LengthMismatch = 3,
DeviceIdMismatch = 4,
- BatchIdMismatch = 5,
- UnknownSender = 6
+ BatchIdMismatch = 5
};
struct FaultCounters {
@@ -27,15 +26,16 @@ struct FaultCounters {
struct MeterData {
uint32_t ts_utc;
uint32_t meter_seconds;
uint16_t short_id;
char device_id[16];
bool energy_multi;
uint8_t energy_meter_count;
uint32_t energy_kwh_int[3];
float energy_total_kwh;
float phase_power_w[3];
float total_power_w;
float battery_voltage_v;
uint8_t battery_percent;
bool meter_seconds_valid;
bool valid;
int16_t link_rssi_dbm;
float link_snr_db;
@@ -50,9 +50,7 @@ struct MeterData {
struct SenderStatus {
MeterData last_data;
uint32_t last_update_ts_utc;
- uint32_t rx_batches_total;
+ uint16_t last_acked_batch_id;
uint32_t rx_batches_duplicate;
uint32_t rx_last_duplicate_ts_utc;
bool has_data;
};

View File

@@ -1,20 +1,8 @@
#pragma once
#include <Arduino.h>
#include "data_model.h"
struct MeterDriverStats {
uint32_t frames_ok;
uint32_t frames_parse_fail;
uint32_t rx_overflow;
uint32_t rx_timeout;
uint32_t bytes_rx;
uint32_t last_rx_ms;
uint32_t last_good_frame_ms;
};
void meter_init();
- bool meter_read(MeterData &data);
+ void meter_poll();
- bool meter_poll_frame(const char *&frame, size_t &len);
+ uint8_t meter_count();
- bool meter_parse_frame(const char *frame, size_t len, MeterData &data);
+ bool meter_get_last_energy_kwh(uint8_t meter_idx, uint32_t &out_energy_kwh);
void meter_get_stats(MeterDriverStats &out);

View File

@@ -9,5 +9,4 @@ void power_configure_unused_pins_sender();
void read_battery(MeterData &data);
uint8_t battery_percent_from_voltage(float voltage_v);
void light_sleep_ms(uint32_t ms);
void light_sleep_chunked_ms(uint32_t total_ms, uint32_t chunk_ms);
void go_to_deep_sleep(uint32_t seconds);

View File

@@ -1,3 +0,0 @@
#pragma once
#include "../src/receiver_pipeline.h"

View File

@@ -1,3 +0,0 @@
#pragma once
#include "../src/sender_state_machine.h"

View File

@@ -25,5 +25,3 @@ bool wifi_connect_sta(const WifiMqttConfig &config, uint32_t timeout_ms = 10000)
void wifi_start_ap(const char *ap_ssid, const char *ap_pass);
bool wifi_is_connected();
String wifi_get_ssid();
bool wifi_try_reconnect_sta(const WifiMqttConfig &config, uint32_t timeout_ms = 5000);
void wifi_restore_ap_mode(const char *ap_ssid, const char *ap_pass);

View File

@@ -1,3 +0,0 @@
#pragma once
#include "../../../include/data_model.h"

View File

@@ -1,4 +0,0 @@
#pragma once
// Include this header in legacy Unity tests to force-link dd3_legacy_core.
void dd3_legacy_core_force_link();

View File

@@ -1,3 +0,0 @@
#pragma once
#include "../../../include/html_util.h"

View File

@@ -1,3 +0,0 @@
#pragma once
#include "../../../include/json_codec.h"

View File

@@ -1,37 +0,0 @@
#pragma once
#include <Arduino.h>
struct BatchInput {
uint16_t sender_id;
uint16_t batch_id;
uint32_t t_last;
uint32_t present_mask;
uint8_t n;
uint16_t battery_mV;
uint8_t err_m;
uint8_t err_d;
uint8_t err_tx;
uint8_t err_last;
uint8_t err_rx_reject;
uint32_t energy_wh[30];
int16_t p1_w[30];
int16_t p2_w[30];
int16_t p3_w[30];
};
bool encode_batch(const BatchInput &in, uint8_t *out, size_t out_cap, size_t *out_len);
bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out);
size_t uleb128_encode(uint32_t v, uint8_t *out, size_t cap);
bool uleb128_decode(const uint8_t *in, size_t len, size_t *pos, uint32_t *v);
uint32_t zigzag32(int32_t x);
int32_t unzigzag32(uint32_t u);
size_t svarint_encode(int32_t x, uint8_t *out, size_t cap);
bool svarint_decode(const uint8_t *in, size_t len, size_t *pos, int32_t *x);
#ifdef PAYLOAD_CODEC_TEST
bool payload_codec_self_test();
#endif
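The codec header pairs ULEB128 varints with a zigzag mapping for signed deltas, so small magnitudes of either sign encode in one byte. A host-side sketch of both primitives (Python stand-ins whose names mirror the declarations, not the firmware code):

```python
def uleb128_encode(v: int) -> bytes:
    # 7 data bits per byte, MSB set while more bytes follow
    out = bytearray()
    while True:
        b = v & 0x7F
        v >>= 7
        out.append(b | 0x80 if v else b)
        if not v:
            return bytes(out)

def uleb128_decode(buf: bytes) -> int:
    v = shift = 0
    for b in buf:
        v |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            break
    return v

def zigzag32(x: int) -> int:
    # Maps 0, -1, 1, -2, ... onto 0, 1, 2, 3, ... so small deltas stay short
    return ((x << 1) ^ (x >> 31)) & 0xFFFFFFFF

def unzigzag32(u: int) -> int:
    return (u >> 1) ^ -(u & 1)
```

`svarint_encode` in the header is then just `uleb128_encode(zigzag32(x))`, which is why per-sample power deltas around zero cost a single byte each.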

View File

@@ -1,3 +0,0 @@
#include "dd3_legacy_core.h"
void dd3_legacy_core_force_link() {}

View File

@@ -1,28 +0,0 @@
#pragma once
#include <Arduino.h>
struct BatchReassemblyState {
bool active;
uint16_t batch_id;
uint8_t next_index;
uint8_t expected_chunks;
uint16_t total_len;
uint16_t received_len;
uint32_t last_rx_ms;
uint32_t timeout_ms;
};
enum class BatchReassemblyStatus : uint8_t {
InProgress = 0,
Complete = 1,
ErrorReset = 2
};
void batch_reassembly_reset(BatchReassemblyState &state);
BatchReassemblyStatus batch_reassembly_push(BatchReassemblyState &state, uint16_t batch_id, uint8_t chunk_index,
uint8_t chunk_count, uint16_t total_len, const uint8_t *chunk_data,
size_t chunk_len, uint32_t now_ms, uint32_t timeout_ms_for_new_batch,
uint16_t max_total_len, uint8_t *buffer, size_t buffer_cap,
uint16_t &out_complete_len);

View File

@@ -1,7 +0,0 @@
#pragma once
#include <Arduino.h>
bool ha_build_discovery_sensor_payload(const char *device_id, const char *key, const char *name, const char *unit,
const char *device_class, const char *state_topic, const char *value_template,
const char *manufacturer, String &out_payload);

View File

@@ -1,19 +0,0 @@
#pragma once
#include <Arduino.h>
enum class LoraFrameDecodeStatus : uint8_t {
Ok = 0,
LengthMismatch = 1,
CrcFail = 2,
InvalidMsgKind = 3
};
uint16_t lora_crc16_ccitt(const uint8_t *data, size_t len);
bool lora_build_frame(uint8_t msg_kind, uint16_t device_id_short, const uint8_t *payload, size_t payload_len,
uint8_t *out_frame, size_t out_cap, size_t &out_len);
LoraFrameDecodeStatus lora_parse_frame(const uint8_t *frame, size_t frame_len, uint8_t max_msg_kind, uint8_t *out_msg_kind,
uint16_t *out_device_id_short, uint8_t *out_payload, size_t payload_cap,
size_t *out_payload_len);

View File

@@ -1,75 +0,0 @@
#include "batch_reassembly_logic.h"
#include <string.h>
void batch_reassembly_reset(BatchReassemblyState &state) {
state.active = false;
state.batch_id = 0;
state.next_index = 0;
state.expected_chunks = 0;
state.total_len = 0;
state.received_len = 0;
state.last_rx_ms = 0;
state.timeout_ms = 0;
}
BatchReassemblyStatus batch_reassembly_push(BatchReassemblyState &state, uint16_t batch_id, uint8_t chunk_index,
uint8_t chunk_count, uint16_t total_len, const uint8_t *chunk_data,
size_t chunk_len, uint32_t now_ms, uint32_t timeout_ms_for_new_batch,
uint16_t max_total_len, uint8_t *buffer, size_t buffer_cap,
uint16_t &out_complete_len) {
out_complete_len = 0;
if (!buffer || !chunk_data) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
if (chunk_len > 0 && total_len == 0) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
bool expired = state.timeout_ms > 0 && (now_ms - state.last_rx_ms > state.timeout_ms);
if (!state.active || batch_id != state.batch_id || expired) {
if (chunk_index != 0) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
if (total_len == 0 || total_len > max_total_len || chunk_count == 0) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
state.active = true;
state.batch_id = batch_id;
state.expected_chunks = chunk_count;
state.total_len = total_len;
state.received_len = 0;
state.next_index = 0;
state.last_rx_ms = now_ms;
state.timeout_ms = timeout_ms_for_new_batch;
}
if (!state.active || chunk_index != state.next_index || chunk_count != state.expected_chunks) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
if (state.received_len + chunk_len > state.total_len ||
state.received_len + chunk_len > max_total_len ||
state.received_len + chunk_len > buffer_cap) {
batch_reassembly_reset(state);
return BatchReassemblyStatus::ErrorReset;
}
memcpy(&buffer[state.received_len], chunk_data, chunk_len);
state.received_len += static_cast<uint16_t>(chunk_len);
state.next_index++;
state.last_rx_ms = now_ms;
if (state.next_index == state.expected_chunks && state.received_len == state.total_len) {
out_complete_len = state.received_len;
batch_reassembly_reset(state);
return BatchReassemblyStatus::Complete;
}
return BatchReassemblyStatus::InProgress;
}
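The reassembly rules above — chunks strictly in order, reset on any anomaly, complete only when both the chunk count and the byte count line up — can be modeled host-side. A simplified Python sketch of the same state machine (not the firmware code; the dict-based state is an assumption for brevity):

```python
def push_chunk(state: dict, batch_id: int, idx: int, count: int,
               total_len: int, data: bytes, buf: bytearray) -> str:
    # A new batch (or a batch-id change) must start at chunk 0 with sane sizes
    if not state.get("active") or batch_id != state.get("batch_id"):
        if idx != 0 or total_len == 0 or total_len > len(buf) or count == 0:
            state.clear(); return "error"
        state.update(active=True, batch_id=batch_id, expected=count,
                     total=total_len, received=0, next=0)
    # Chunks must arrive strictly in order with a stable chunk count
    if idx != state["next"] or count != state["expected"]:
        state.clear(); return "error"
    if state["received"] + len(data) > state["total"]:
        state.clear(); return "error"
    buf[state["received"]:state["received"] + len(data)] = data
    state["received"] += len(data)
    state["next"] += 1
    if state["next"] == state["expected"] and state["received"] == state["total"]:
        state.clear(); return "complete"
    return "in_progress"
```

Collapsing every anomaly to a full reset is deliberate: over LoRa a lost chunk cannot be re-requested cheaply, so the sender's batch retry is the recovery path.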

View File

@@ -1,37 +0,0 @@
#include "ha_discovery_json.h"
#include <ArduinoJson.h>
bool ha_build_discovery_sensor_payload(const char *device_id, const char *key, const char *name, const char *unit,
const char *device_class, const char *state_topic, const char *value_template,
const char *manufacturer, String &out_payload) {
if (!device_id || !key || !name || !state_topic || !value_template || !manufacturer) {
return false;
}
StaticJsonDocument<256> doc;
String unique_id = String(device_id) + "_" + key;
String sensor_name = String(device_id) + " " + name;
doc["name"] = sensor_name;
doc["state_topic"] = state_topic;
doc["unique_id"] = unique_id;
if (unit && unit[0] != '\0') {
doc["unit_of_measurement"] = unit;
}
if (device_class && device_class[0] != '\0') {
doc["device_class"] = device_class;
}
doc["value_template"] = value_template;
JsonObject device = doc.createNestedObject("device");
JsonArray identifiers = device.createNestedArray("identifiers");
identifiers.add(String(device_id));
device["name"] = String(device_id);
device["model"] = "DD3-LoRa-Bridge";
device["manufacturer"] = manufacturer;
out_payload = "";
size_t len = serializeJson(doc, out_payload);
return len > 0;
}
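The discovery payload above has a fixed shape; mirroring it host-side is handy for asserting key stability (as `test_json_codec` does). A Python sketch that reproduces the field names visible in the C++ source — a test aid, not the firmware code:

```python
import json

def build_discovery_payload(device_id, key, name, state_topic, value_template,
                            manufacturer, unit=None, device_class=None):
    """Host-side mirror of the HA discovery sensor JSON (field names per the C++ builder)."""
    doc = {
        "name": f"{device_id} {name}",
        "state_topic": state_topic,
        "unique_id": f"{device_id}_{key}",
    }
    if unit:
        doc["unit_of_measurement"] = unit
    if device_class:
        doc["device_class"] = device_class
    doc["value_template"] = value_template
    doc["device"] = {
        "identifiers": [device_id],
        "name": device_id,
        "model": "DD3-LoRa-Bridge",
        "manufacturer": manufacturer,
    }
    return json.dumps(doc)
```

Keeping `unique_id` as `<device_id>_<key>` is what lets Home Assistant treat re-published discovery messages as updates rather than new entities.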

View File

@@ -1,88 +0,0 @@
#include "lora_frame_logic.h"
#include <string.h>
uint16_t lora_crc16_ccitt(const uint8_t *data, size_t len) {
if (!data && len > 0) {
return 0;
}
uint16_t crc = 0xFFFF;
for (size_t i = 0; i < len; ++i) {
crc ^= static_cast<uint16_t>(data[i]) << 8;
for (uint8_t b = 0; b < 8; ++b) {
if (crc & 0x8000) {
crc = (crc << 1) ^ 0x1021;
} else {
crc <<= 1;
}
}
}
return crc;
}
bool lora_build_frame(uint8_t msg_kind, uint16_t device_id_short, const uint8_t *payload, size_t payload_len,
uint8_t *out_frame, size_t out_cap, size_t &out_len) {
out_len = 0;
if (!out_frame) {
return false;
}
if (payload_len > 0 && !payload) {
return false;
}
if (payload_len > (SIZE_MAX - 5)) {
return false;
}
size_t needed = payload_len + 5;
if (needed > out_cap) {
return false;
}
size_t idx = 0;
out_frame[idx++] = msg_kind;
out_frame[idx++] = static_cast<uint8_t>(device_id_short >> 8);
out_frame[idx++] = static_cast<uint8_t>(device_id_short & 0xFF);
if (payload_len > 0) {
memcpy(&out_frame[idx], payload, payload_len);
idx += payload_len;
}
uint16_t crc = lora_crc16_ccitt(out_frame, idx);
out_frame[idx++] = static_cast<uint8_t>(crc >> 8);
out_frame[idx++] = static_cast<uint8_t>(crc & 0xFF);
out_len = idx;
return true;
}
LoraFrameDecodeStatus lora_parse_frame(const uint8_t *frame, size_t frame_len, uint8_t max_msg_kind, uint8_t *out_msg_kind,
uint16_t *out_device_id_short, uint8_t *out_payload, size_t payload_cap,
size_t *out_payload_len) {
if (!frame || !out_msg_kind || !out_device_id_short || !out_payload_len) {
return LoraFrameDecodeStatus::LengthMismatch;
}
if (frame_len < 5) {
return LoraFrameDecodeStatus::LengthMismatch;
}
size_t payload_len = frame_len - 5;
if (payload_len > payload_cap || (payload_len > 0 && !out_payload)) {
return LoraFrameDecodeStatus::LengthMismatch;
}
uint16_t crc_calc = lora_crc16_ccitt(frame, frame_len - 2);
uint16_t crc_rx = static_cast<uint16_t>(frame[frame_len - 2] << 8) | frame[frame_len - 1];
if (crc_calc != crc_rx) {
return LoraFrameDecodeStatus::CrcFail;
}
uint8_t msg_kind = frame[0];
if (msg_kind > max_msg_kind) {
return LoraFrameDecodeStatus::InvalidMsgKind;
}
*out_msg_kind = msg_kind;
*out_device_id_short = static_cast<uint16_t>(frame[1] << 8) | frame[2];
if (payload_len > 0) {
memcpy(out_payload, &frame[3], payload_len);
}
*out_payload_len = payload_len;
return LoraFrameDecodeStatus::Ok;
}
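The wire format visible in the C++ above is `[kind][id_hi][id_lo][payload...][crc_hi][crc_lo]`, with CRC-16/CCITT (poly 0x1021, init 0xFFFF) over everything except the trailing CRC. That is easy to mirror host-side for golden-vector tests — a Python sketch, not the firmware code:

```python
def crc16_ccitt(data: bytes) -> int:
    # CRC-16/CCITT-FALSE: poly 0x1021, init 0xFFFF, no reflection, no xorout
    crc = 0xFFFF
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

def build_frame(kind: int, dev_id: int, payload: bytes) -> bytes:
    body = bytes([kind, dev_id >> 8, dev_id & 0xFF]) + payload
    crc = crc16_ccitt(body)
    return body + bytes([crc >> 8, crc & 0xFF])

def parse_frame(frame: bytes):
    # Returns (kind, dev_id, payload) or None on length/CRC failure
    if len(frame) < 5:
        return None
    if crc16_ccitt(frame[:-2]) != (frame[-2] << 8) | frame[-1]:
        return None
    return frame[0], (frame[1] << 8) | frame[2], frame[3:-2]
```

The standard check value for this CRC variant is `crc16_ccitt(b"123456789") == 0x29B1`, which makes a convenient first assertion when porting the routine.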

View File

@@ -1,14 +1,14 @@
; PlatformIO Project Configuration File
;
- ; Build targets:
- ;   production   serial off, light-sleep on (normal deployment)
- ;   debug        serial + meter diag + state tracing (real meter, real data)
- ;   test         synthetic meter data + payload codec self-test (no real meter needed)
- ;
- ; LoRa frequency and sender IDs are configured in include/config.h,
- ; NOT via build flags. Change them there before building.
+ ;   Build options: build flags, source filter
+ ;   Upload options: custom upload port, speed and extra flags
+ ;   Library options: dependencies, extra library storages
+ ;   Advanced options: extra scripting
+ ;
+ ; Please visit documentation for the other options and examples
+ ; https://docs.platformio.org/page/projectconf.html
- [env]
+ [env:lilygo-t3-v1-6-1]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
@@ -18,40 +18,104 @@ lib_deps =
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
throwtheswitch/Unity@^2.6.1
; --- Hardening flags for all builds ---
build_flags =
-fstack-protector-strong
-D_FORTIFY_SOURCE=2
-Wformat -Wformat-security
-Wno-format-truncation
; --- Production: serial off, light-sleep on ---
[env:production]
build_flags =
${env.build_flags}
-DSERIAL_DEBUG_MODE_FLAG=0
-DENABLE_LIGHT_SLEEP_IDLE=1
; --- Debug: serial + all diagnostics, real meter data ---
; Does NOT enable test mode — uses real meter + real LoRa.
[env:debug]
build_flags =
${env.build_flags}
-DSERIAL_DEBUG_MODE_FLAG=1
-DENABLE_LIGHT_SLEEP_IDLE=1
-DDEBUG_METER_DIAG
-DDD3_DEBUG
- ; --- Test: synthetic meter samples, payload codec self-test at boot ---
- ; Replaces real meter reading with fake 1 Hz data and publishes to test MQTT topic.
- ; Use for bench testing without a physical meter attached.
- [env:test]
+ [env:lilygo-t3-v1-6-1-test]
+ platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
+ board = ttgo-lora32-v1
+ framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
${env.build_flags}
-DSERIAL_DEBUG_MODE_FLAG=1
-DENABLE_TEST_MODE
[env:lilygo-t3-v1-6-1-868]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=1
-DLORA_FREQUENCY_HZ=868E6
[env:lilygo-t3-v1-6-1-868-test]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=1
-DENABLE_TEST_MODE
-DLORA_FREQUENCY_HZ=868E6
[env:lilygo-t3-v1-6-1-payload-test]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=1
-DPAYLOAD_CODEC_TEST
-DDEBUG_METER_DIAG
- -DDD3_DEBUG
+ [env:lilygo-t3-v1-6-1-868-payload-test]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=1
-DPAYLOAD_CODEC_TEST
-DLORA_FREQUENCY_HZ=868E6
[env:lilygo-t3-v1-6-1-prod]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=0
[env:lilygo-t3-v1-6-1-868-prod]
platform = https://github.com/pioarduino/platform-espressif32/releases/download/51.03.07/platform-espressif32.zip
board = ttgo-lora32-v1
framework = arduino
lib_deps =
sandeepmistry/LoRa@^0.8.0
bblanchon/ArduinoJson@^6.21.5
adafruit/Adafruit SSD1306@^2.5.9
adafruit/Adafruit GFX Library@^1.11.9
knolleary/PubSubClient@^2.8
build_flags =
-DSERIAL_DEBUG_MODE_FLAG=0
-DLORA_FREQUENCY_HZ=868E6

View File

@@ -1,611 +0,0 @@
#!/usr/bin/env python3
"""
DD3 LoRa Bridge - MQTT Data Republisher
Republishes historical meter data from SD card CSV files to MQTT
Prevents data loss by allowing recovery of data during WiFi/MQTT downtime
"""
import argparse
import csv
import json
import os
import sys
import time
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional, Tuple, List
import paho.mqtt.client as mqtt
# Optional: for auto-detection of missing data
try:
from influxdb_client import InfluxDBClient
HAS_INFLUXDB = True
except ImportError:
HAS_INFLUXDB = False
class MQTTRepublisher:
"""Republish meter data from CSV files to MQTT"""
def __init__(self, broker: str, port: int, username: str = None, password: str = None,
rate_per_sec: int = 5):
self.broker = broker
self.port = port
self.username = username
self.password = password
self.rate_per_sec = rate_per_sec
self.delay_sec = 1.0 / rate_per_sec
self.client = mqtt.Client()
self.client.on_connect = self._on_connect
self.client.on_disconnect = self._on_disconnect
self.connected = False
if username and password:
self.client.username_pw_set(username, password)
def _on_connect(self, client, userdata, flags, rc):
if rc == 0:
self.connected = True
print(f"✓ Connected to MQTT broker at {self.broker}:{self.port}")
else:
print(f"✗ Failed to connect to MQTT broker. Error code: {rc}")
self.connected = False
def _on_disconnect(self, client, userdata, rc):
self.connected = False
if rc != 0:
print(f"✗ Unexpected disconnection. Error code: {rc}")
def connect(self):
"""Connect to MQTT broker"""
try:
self.client.connect(self.broker, self.port, keepalive=60)
self.client.loop_start()
# Wait for connection to establish
timeout = 10
start = time.time()
while not self.connected and time.time() - start < timeout:
time.sleep(0.1)
if not self.connected:
raise RuntimeError(f"Failed to connect within {timeout}s")
except Exception as e:
print(f"✗ Connection error: {e}")
raise
def disconnect(self):
"""Disconnect from MQTT broker"""
self.client.loop_stop()
self.client.disconnect()
def publish_sample(self, device_id: str, ts_utc: int, data: dict) -> bool:
"""Publish a single meter sample to MQTT"""
if not self.connected:
print("✗ Not connected to MQTT broker")
return False
try:
topic = f"smartmeter/{device_id}/state"
payload = json.dumps(data)
result = self.client.publish(topic, payload)
if result.rc != mqtt.MQTT_ERR_SUCCESS:
print(f"✗ Publish failed: {mqtt.error_string(result.rc)}")
return False
return True
except Exception as e:
print(f"✗ Error publishing: {e}")
return False
def republish_csv(self, csv_file: str, device_id: str,
filter_from: Optional[int] = None,
filter_to: Optional[int] = None) -> int:
"""
Republish data from CSV file to MQTT
Args:
csv_file: Path to CSV file
device_id: Device ID for MQTT topic
filter_from: Unix timestamp - only publish samples >= this time
filter_to: Unix timestamp - only publish samples <= this time
Returns:
Number of samples published
"""
if not os.path.isfile(csv_file):
print(f"✗ File not found: {csv_file}")
return 0
count = 0
skipped = 0
try:
with open(csv_file, 'r') as f:
reader = csv.DictReader(f)
if not reader.fieldnames:
print(f"✗ Invalid CSV: no header row")
return 0
# Validate required fields
required = ['ts_utc', 'e_kwh', 'p_w']
missing = [field for field in required if field not in reader.fieldnames]
if missing:
print(f"✗ Missing required CSV columns: {missing}")
return 0
for row in reader:
try:
ts_utc = int(row['ts_utc'])
# Apply time filter
if filter_from and ts_utc < filter_from:
skipped += 1
continue
if filter_to and ts_utc > filter_to:
break
# Build MQTT payload matching device format
data = {
'id': self._extract_short_id(device_id),
'ts': ts_utc,
}
# Energy (formatted as 2 decimal places)
try:
e_kwh = float(row['e_kwh'])
data['e_kwh'] = f"{e_kwh:.2f}"
except (ValueError, KeyError):
pass
# Power values (as integers)
for key in ['p_w', 'p1_w', 'p2_w', 'p3_w']:
if key in row and row[key].strip():
try:
data[key] = int(round(float(row[key])))
except ValueError:
pass
# Battery
if 'bat_v' in row and row['bat_v'].strip():
try:
data['bat_v'] = f"{float(row['bat_v']):.2f}"
except ValueError:
pass
if 'bat_pct' in row and row['bat_pct'].strip():
try:
data['bat_pct'] = int(row['bat_pct'])
except ValueError:
pass
# Link quality
if 'rssi' in row and row['rssi'].strip() and row['rssi'] != '-127':
try:
data['rssi'] = int(row['rssi'])
except ValueError:
pass
if 'snr' in row and row['snr'].strip():
try:
data['snr'] = float(row['snr'])
except ValueError:
pass
# Publish with rate limiting
if self.publish_sample(device_id, ts_utc, data):
count += 1
print(f" [{count:4d}] {ts_utc} {data.get('p_w', '?')}W {data.get('e_kwh', '?')}kWh", end='\r')
# Rate limiting: delay between messages
if self.rate_per_sec > 0:
time.sleep(self.delay_sec)
except (ValueError, KeyError) as e:
skipped += 1
continue
except Exception as e:
print(f"✗ Error reading CSV: {e}")
return count
print(f"✓ Published {count} samples, skipped {skipped}")
return count
@staticmethod
def _extract_short_id(device_id: str) -> str:
"""Extract last 4 chars of device_id (e.g., 'dd3-F19C' -> 'F19C')"""
if len(device_id) >= 4:
return device_id[-4:].upper()
return device_id.upper()
class InfluxDBHelper:
"""Helper to detect missing data ranges in InfluxDB"""
def __init__(self, url: str, token: str, org: str, bucket: str):
if not HAS_INFLUXDB:
raise ImportError("influxdb-client not installed. Install with: pip install influxdb-client")
self.client = InfluxDBClient(url=url, token=token, org=org)
self.bucket = bucket
self.query_api = self.client.query_api()
def find_missing_ranges(self, device_id: str, from_time: int, to_time: int,
expected_interval: int = 30) -> List[Tuple[int, int]]:
"""
Find time ranges missing from InfluxDB
Args:
device_id: Device ID
from_time: Start timestamp (Unix)
to_time: End timestamp (Unix)
expected_interval: Expected seconds between samples (default 30s)
Returns:
List of (start, end) tuples for missing ranges
"""
# Query InfluxDB for existing data
query = f'''
from(bucket: "{self.bucket}")
|> range(start: {from_time}s, stop: {to_time}s)
|> filter(fn: (r) => r._measurement == "smartmeter" and r.device_id == "{device_id}")
|> keep(columns: ["_time"])
|> sort(columns: ["_time"])
'''
try:
tables = self.query_api.query(query)
existing_times = []
for table in tables:
for record in table.records:
ts = int(record.values["_time"].timestamp())
existing_times.append(ts)
if not existing_times:
# No data in InfluxDB, entire range is missing
return [(from_time, to_time)]
missing_ranges = []
prev_ts = from_time
for ts in sorted(existing_times):
gap = ts - prev_ts
# If gap is larger than expected interval, we're missing data
if gap > expected_interval * 1.5:
missing_ranges.append((prev_ts, ts))
prev_ts = ts
# Check if missing data at the end
if prev_ts < to_time:
missing_ranges.append((prev_ts, to_time))
return missing_ranges
except Exception as e:
print(f"✗ InfluxDB query error: {e}")
return []
def close(self):
"""Close InfluxDB connection"""
self.client.close()
def parse_time_input(time_str: str, reference_date: datetime = None) -> int:
"""Parse time input and return Unix timestamp"""
if reference_date is None:
reference_date = datetime.now()
# Try various formats
formats = [
'%Y-%m-%d',
'%Y-%m-%d %H:%M:%S',
'%Y-%m-%d %H:%M',
'%H:%M:%S',
'%H:%M',
]
for fmt in formats:
try:
dt = datetime.strptime(time_str, fmt)
# If time-only format, use reference date
if '%Y' not in fmt:
dt = dt.replace(year=reference_date.year,
month=reference_date.month,
day=reference_date.day)
return int(dt.timestamp())
except ValueError:
continue
raise ValueError(f"Cannot parse time: {time_str}")
def interactive_time_selection() -> Tuple[int, int]:
"""Interactively get time range from user"""
print("\n=== Time Range Selection ===")
print("Enter dates in format: YYYY-MM-DD or YYYY-MM-DD HH:MM:SS")
while True:
try:
from_str = input("\nStart time (YYYY-MM-DD): ").strip()
from_time = parse_time_input(from_str)
to_str = input("End time (YYYY-MM-DD): ").strip()
to_time = parse_time_input(to_str)
if from_time >= to_time:
print("✗ Start time must be before end time")
continue
# Show 1-day bounds to user
from_dt = datetime.fromtimestamp(from_time)
to_dt = datetime.fromtimestamp(to_time)
print(f"\n→ Will publish data from {from_dt} to {to_dt}")
confirm = input("Confirm? (y/n): ").strip().lower()
if confirm == 'y':
return from_time, to_time
except ValueError as e:
print(f"{e}")
def interactive_csv_file_selection() -> str:
"""Help user select CSV files from SD card"""
print("\n=== CSV File Selection ===")
csv_dir = input("Enter path to CSV directory (or 'auto' to scan): ").strip()
if csv_dir.lower() == 'auto':
# Scan common locations
possible_paths = [
".",
"./sd_data",
"./data",
"D:\\", # SD card on Windows
"/mnt/sd", # SD card on Linux
]
for path in possible_paths:
if os.path.isdir(path):
csv_dir = path
break
# Find all CSV files
if not os.path.isdir(csv_dir):
print(f"✗ Directory not found: {csv_dir}")
return None
    # Sort once so the numbers shown to the user match the indices used below
    csv_files = sorted(Path(csv_dir).rglob("*.csv"))
    if not csv_files:
        print(f"✗ No CSV files found in {csv_dir}")
        return None
    print(f"\nFound {len(csv_files)} CSV files:")
    for i, f in enumerate(csv_files[:20], 1):
print(f" {i}. {f.relative_to(csv_dir) if csv_dir != '.' else f}")
if len(csv_files) > 20:
print(f" ... and {len(csv_files) - 20} more")
selected = input("\nEnter CSV file number or path: ").strip()
try:
idx = int(selected) - 1
if 0 <= idx < len(csv_files):
return str(csv_files[idx])
except ValueError:
pass
# User entered a path
if os.path.isfile(selected):
return selected
print(f"✗ Invalid selection: {selected}")
return None
def main():
parser = argparse.ArgumentParser(
description="Republish DD3 meter data from CSV to MQTT",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
# Interactive mode (will prompt for all settings)
python republish_mqtt.py -i
# Republish specific CSV file with automatic time detection (InfluxDB)
python republish_mqtt.py -f data.csv -d dd3-F19C \\
--mqtt-broker 192.168.1.100 \\
--influxdb-url http://localhost:8086 \\
--influxdb-token mytoken --influxdb-org myorg
# Manual time range
python republish_mqtt.py -f data.csv -d dd3-F19C \\
--mqtt-broker 192.168.1.100 \\
--from-time "2026-03-01" --to-time "2026-03-05"
"""
)
parser.add_argument('-i', '--interactive', action='store_true',
help='Interactive mode (prompt for all settings)')
parser.add_argument('-f', '--file', type=str,
help='CSV file path')
parser.add_argument('-d', '--device-id', type=str,
help='Device ID (e.g., dd3-F19C)')
parser.add_argument('--mqtt-broker', type=str, default='localhost',
help='MQTT broker address (default: localhost)')
parser.add_argument('--mqtt-port', type=int, default=1883,
help='MQTT broker port (default: 1883)')
parser.add_argument('--mqtt-user', type=str,
help='MQTT username')
parser.add_argument('--mqtt-pass', type=str,
help='MQTT password')
parser.add_argument('--rate', type=int, default=5,
help='Publish rate (messages per second, default: 5)')
parser.add_argument('--from-time', type=str,
help='Start time (YYYY-MM-DD or YYYY-MM-DD HH:MM:SS)')
parser.add_argument('--to-time', type=str,
help='End time (YYYY-MM-DD or YYYY-MM-DD HH:MM:SS)')
parser.add_argument('--influxdb-url', type=str,
help='InfluxDB URL (for auto-detection)')
parser.add_argument('--influxdb-token', type=str,
help='InfluxDB API token')
parser.add_argument('--influxdb-org', type=str,
help='InfluxDB organization')
parser.add_argument('--influxdb-bucket', type=str, default='smartmeter',
help='InfluxDB bucket (default: smartmeter)')
args = parser.parse_args()
# Interactive mode
if args.interactive or not args.file:
print("╔════════════════════════════════════════════════════╗")
print("║ DD3 LoRa Bridge - MQTT Data Republisher ║")
print("║ Recover lost meter data from SD card CSV files ║")
print("╚════════════════════════════════════════════════════╝")
# Get CSV file
csv_file = args.file or interactive_csv_file_selection()
if not csv_file:
sys.exit(1)
# Get device ID
device_id = args.device_id
if not device_id:
device_id = input("\nDevice ID (e.g., dd3-F19C): ").strip()
if not device_id:
print("✗ Device ID required")
sys.exit(1)
# Get MQTT settings
mqtt_broker = input(f"\nMQTT Broker [{args.mqtt_broker}]: ").strip() or args.mqtt_broker
mqtt_port = args.mqtt_port
mqtt_user = input("MQTT Username (leave empty if none): ").strip() or None
mqtt_pass = None
if mqtt_user:
import getpass
mqtt_pass = getpass.getpass("MQTT Password: ")
# Get time range
print("\n=== Select Time Range ===")
use_influx = HAS_INFLUXDB and input("Auto-detect missing ranges from InfluxDB? (y/n): ").strip().lower() == 'y'
from_time = None
to_time = None
if use_influx:
influx_url = input("InfluxDB URL: ").strip()
influx_token = input("API Token: ").strip()
influx_org = input("Organization: ").strip()
try:
helper = InfluxDBHelper(influx_url, influx_token, influx_org,
args.influxdb_bucket)
# Get user's date range first
from_time, to_time = interactive_time_selection()
print("\nSearching for missing data in InfluxDB...")
missing_ranges = helper.find_missing_ranges(device_id, from_time, to_time)
helper.close()
if missing_ranges:
print(f"\nFound {len(missing_ranges)} missing data range(s):")
for i, (start, end) in enumerate(missing_ranges, 1):
start_dt = datetime.fromtimestamp(start)
end_dt = datetime.fromtimestamp(end)
duration = (end - start) / 3600
print(f" {i}. {start_dt} to {end_dt} ({duration:.1f} hours)")
# Use first range by default
from_time, to_time = missing_ranges[0]
print(f"\nWill republish first range: {datetime.fromtimestamp(from_time)} to {datetime.fromtimestamp(to_time)}")
else:
print("No missing data found in InfluxDB")
except Exception as e:
print(f"✗ InfluxDB error: {e}")
sys.exit(1)
else:
from_time, to_time = interactive_time_selection()
else:
# Command-line mode
csv_file = args.file
device_id = args.device_id
if not device_id:
print("✗ Device ID required (use -d or --device-id)")
sys.exit(1)
mqtt_broker = args.mqtt_broker
mqtt_port = args.mqtt_port
mqtt_user = args.mqtt_user
mqtt_pass = args.mqtt_pass
# Parse time range
if args.from_time and args.to_time:
try:
from_time = parse_time_input(args.from_time)
to_time = parse_time_input(args.to_time)
except ValueError as e:
                print(f"✗ {e}")
sys.exit(1)
else:
# Auto-detect if InfluxDB is available
if args.influxdb_url and args.influxdb_token and args.influxdb_org and HAS_INFLUXDB:
print("Auto-detecting missing data ranges...")
try:
helper = InfluxDBHelper(args.influxdb_url, args.influxdb_token,
args.influxdb_org, args.influxdb_bucket)
# Default to last 7 days
now = int(time.time())
from_time = now - (7 * 24 * 3600)
to_time = now
missing_ranges = helper.find_missing_ranges(device_id, from_time, to_time)
helper.close()
if missing_ranges:
from_time, to_time = missing_ranges[0]
print(f"Found missing data: {datetime.fromtimestamp(from_time)} to {datetime.fromtimestamp(to_time)}")
else:
print("No missing data found")
sys.exit(0)
except Exception as e:
                    print(f"✗ {e}")
sys.exit(1)
else:
print("✗ Time range required (use --from-time and --to-time, or InfluxDB settings)")
sys.exit(1)
# Republish data
print(f"\n=== Publishing to MQTT ===")
print(f"Broker: {mqtt_broker}:{mqtt_port}")
print(f"Device: {device_id}")
print(f"Rate: {args.rate} msg/sec")
print(f"Range: {datetime.fromtimestamp(from_time)} to {datetime.fromtimestamp(to_time)}")
print()
try:
republisher = MQTTRepublisher(mqtt_broker, mqtt_port, mqtt_user, mqtt_pass,
rate_per_sec=args.rate)
republisher.connect()
count = republisher.republish_csv(csv_file, device_id,
filter_from=from_time,
filter_to=to_time)
republisher.disconnect()
print(f"\n✓ Successfully published {count} samples")
except KeyboardInterrupt:
print("\n\n⚠ Interrupted by user")
if 'republisher' in locals():
republisher.disconnect()
sys.exit(0)
except Exception as e:
print(f"\n✗ Error: {e}")
sys.exit(1)
if __name__ == '__main__':
main()
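The CLI's auto-detect path relies on `InfluxDBHelper.find_missing_ranges`, defined earlier in the script; the GUI variant further down applies a 60-second gap rule in `get_time_range`. That rule reduces to a scan over sorted sample times; a minimal sketch (function name illustrative):

```python
def find_gaps(timestamps, min_gap_s=60):
    # Report (start, end) pairs where consecutive samples are farther
    # apart than min_gap_s seconds -- candidate missing-data ranges.
    ts = sorted(set(timestamps))
    return [(prev, cur) for prev, cur in zip(ts, ts[1:]) if cur - prev > min_gap_s]
```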

View File

@@ -1,512 +0,0 @@
#!/usr/bin/env python3
"""
DD3 LoRa Bridge - MQTT Data Republisher GUI
Visual interface for recovering lost meter data from SD card
"""
import tkinter as tk
from tkinter import ttk, filedialog, messagebox, scrolledtext
import threading
import json
import csv
import os
import sys
import time
from datetime import datetime, timedelta
from pathlib import Path
from typing import Optional, Tuple
import paho.mqtt.client as mqtt
# Optional: for auto-detection
try:
from influxdb_client import InfluxDBClient
HAS_INFLUXDB = True
except ImportError:
HAS_INFLUXDB = False
class MQTTRepublisherGUI:
def __init__(self, root):
self.root = root
self.root.title("DD3 MQTT Data Republisher")
self.root.geometry("900x750")
self.root.resizable(True, True)
# Style
style = ttk.Style()
style.theme_use('clam')
self.publishing = False
self.mqtt_client = None
self.published_count = 0
self.skipped_count = 0
self.create_widgets()
def create_widgets(self):
"""Create GUI widgets"""
# Main notebook (tabs)
self.notebook = ttk.Notebook(self.root)
self.notebook.pack(fill='both', expand=True, padx=5, pady=5)
# Tab 1: Settings
settings_frame = ttk.Frame(self.notebook)
self.notebook.add(settings_frame, text='Settings')
self.create_settings_tab(settings_frame)
# Tab 2: Time Range
time_frame = ttk.Frame(self.notebook)
self.notebook.add(time_frame, text='Time Range')
self.create_time_tab(time_frame)
# Tab 3: Progress
progress_frame = ttk.Frame(self.notebook)
self.notebook.add(progress_frame, text='Progress')
self.create_progress_tab(progress_frame)
# Button bar at bottom
button_frame = ttk.Frame(self.root)
button_frame.pack(fill='x', padx=5, pady=5)
ttk.Button(button_frame, text='Start Publishing', command=self.start_publishing).pack(side='left', padx=2)
ttk.Button(button_frame, text='Stop', command=self.stop_publishing).pack(side='left', padx=2)
ttk.Button(button_frame, text='Exit', command=self.root.quit).pack(side='right', padx=2)
self.status_label = ttk.Label(button_frame, text='Ready', relief='sunken')
self.status_label.pack(side='right', fill='x', expand=True, padx=2)
def create_settings_tab(self, parent):
"""Create settings tab"""
main_frame = ttk.Frame(parent, padding=10)
main_frame.pack(fill='both', expand=True)
# CSV File Selection
ttk.Label(main_frame, text='CSV File:', font=('TkDefaultFont', 10, 'bold')).grid(row=0, column=0, sticky='w', pady=10)
frame = ttk.Frame(main_frame)
frame.grid(row=1, column=0, columnspan=2, sticky='ew', pady=(0, 20))
self.csv_file_var = tk.StringVar()
ttk.Entry(frame, textvariable=self.csv_file_var, width=50).pack(side='left', fill='x', expand=True)
ttk.Button(frame, text='Browse...', command=self.select_csv_file).pack(side='right', padx=5)
# Device ID
ttk.Label(main_frame, text='Device ID:', font=('TkDefaultFont', 10, 'bold')).grid(row=2, column=0, sticky='w', pady=5)
self.device_id_var = tk.StringVar(value='dd3-F19C')
ttk.Entry(main_frame, textvariable=self.device_id_var, width=30).grid(row=2, column=1, sticky='w', pady=5)
# MQTT Settings
ttk.Label(main_frame, text='MQTT Broker:', font=('TkDefaultFont', 10, 'bold')).grid(row=3, column=0, sticky='w', pady=5)
self.mqtt_broker_var = tk.StringVar(value='localhost')
ttk.Entry(main_frame, textvariable=self.mqtt_broker_var, width=30).grid(row=3, column=1, sticky='w', pady=5)
ttk.Label(main_frame, text='Port:', font=('TkDefaultFont', 10)).grid(row=4, column=0, sticky='w', pady=5)
self.mqtt_port_var = tk.StringVar(value='1883')
ttk.Entry(main_frame, textvariable=self.mqtt_port_var, width=30).grid(row=4, column=1, sticky='w', pady=5)
ttk.Label(main_frame, text='Username:', font=('TkDefaultFont', 10)).grid(row=5, column=0, sticky='w', pady=5)
self.mqtt_user_var = tk.StringVar()
ttk.Entry(main_frame, textvariable=self.mqtt_user_var, width=30).grid(row=5, column=1, sticky='w', pady=5)
ttk.Label(main_frame, text='Password:', font=('TkDefaultFont', 10)).grid(row=6, column=0, sticky='w', pady=5)
self.mqtt_pass_var = tk.StringVar()
ttk.Entry(main_frame, textvariable=self.mqtt_pass_var, width=30, show='*').grid(row=6, column=1, sticky='w', pady=5)
# Publish Rate
ttk.Label(main_frame, text='Publish Rate (msg/sec):', font=('TkDefaultFont', 10)).grid(row=7, column=0, sticky='w', pady=5)
self.rate_var = tk.StringVar(value='5')
rate_spin = ttk.Spinbox(main_frame, from_=1, to=100, textvariable=self.rate_var, width=10)
rate_spin.grid(row=7, column=1, sticky='w', pady=5)
# Test Connection Button
ttk.Button(main_frame, text='Test MQTT Connection', command=self.test_connection).grid(row=8, column=0, columnspan=2, sticky='ew', pady=20)
# Configure grid weights
main_frame.columnconfigure(1, weight=1)
def create_time_tab(self, parent):
"""Create time range selection tab"""
main_frame = ttk.Frame(parent, padding=10)
main_frame.pack(fill='both', expand=True)
# Mode selection
ttk.Label(main_frame, text='Time Range Mode:', font=('TkDefaultFont', 10, 'bold')).pack(anchor='w', pady=10)
self.time_mode_var = tk.StringVar(value='manual')
ttk.Radiobutton(main_frame, text='Manual Selection', variable=self.time_mode_var,
value='manual', command=self.update_time_mode).pack(anchor='w', padx=20, pady=5)
if HAS_INFLUXDB:
ttk.Radiobutton(main_frame, text='Auto-Detect from InfluxDB', variable=self.time_mode_var,
value='influxdb', command=self.update_time_mode).pack(anchor='w', padx=20, pady=5)
# Manual time selection frame
self.manual_frame = ttk.LabelFrame(main_frame, text='Manual Time Range', padding=10)
self.manual_frame.pack(fill='x', padx=20, pady=10)
ttk.Label(self.manual_frame, text='Start Date (YYYY-MM-DD):').grid(row=0, column=0, sticky='w', pady=5)
self.from_date_var = tk.StringVar(value=(datetime.now() - timedelta(days=1)).strftime('%Y-%m-%d'))
ttk.Entry(self.manual_frame, textvariable=self.from_date_var, width=30).grid(row=0, column=1, sticky='w', pady=5)
ttk.Label(self.manual_frame, text='Start Time (HH:MM:SS):').grid(row=1, column=0, sticky='w', pady=5)
self.from_time_var = tk.StringVar(value='00:00:00')
ttk.Entry(self.manual_frame, textvariable=self.from_time_var, width=30).grid(row=1, column=1, sticky='w', pady=5)
ttk.Label(self.manual_frame, text='End Date (YYYY-MM-DD):').grid(row=2, column=0, sticky='w', pady=5)
self.to_date_var = tk.StringVar(value=datetime.now().strftime('%Y-%m-%d'))
ttk.Entry(self.manual_frame, textvariable=self.to_date_var, width=30).grid(row=2, column=1, sticky='w', pady=5)
ttk.Label(self.manual_frame, text='End Time (HH:MM:SS):').grid(row=3, column=0, sticky='w', pady=5)
self.to_time_var = tk.StringVar(value='23:59:59')
ttk.Entry(self.manual_frame, textvariable=self.to_time_var, width=30).grid(row=3, column=1, sticky='w', pady=5)
# InfluxDB frame
self.influxdb_frame = ttk.LabelFrame(main_frame, text='InfluxDB Settings', padding=10)
if self.time_mode_var.get() == 'influxdb':
self.influxdb_frame.pack(fill='x', padx=20, pady=10)
else:
self.influxdb_frame.pack_forget()
ttk.Label(self.influxdb_frame, text='InfluxDB URL:').grid(row=0, column=0, sticky='w', pady=5)
self.influx_url_var = tk.StringVar(value='http://localhost:8086')
ttk.Entry(self.influxdb_frame, textvariable=self.influx_url_var, width=30).grid(row=0, column=1, sticky='w', pady=5)
ttk.Label(self.influxdb_frame, text='API Token:').grid(row=1, column=0, sticky='w', pady=5)
self.influx_token_var = tk.StringVar()
ttk.Entry(self.influxdb_frame, textvariable=self.influx_token_var, width=30, show='*').grid(row=1, column=1, sticky='w', pady=5)
ttk.Label(self.influxdb_frame, text='Organization:').grid(row=2, column=0, sticky='w', pady=5)
self.influx_org_var = tk.StringVar()
ttk.Entry(self.influxdb_frame, textvariable=self.influx_org_var, width=30).grid(row=2, column=1, sticky='w', pady=5)
ttk.Label(self.influxdb_frame, text='Bucket:').grid(row=3, column=0, sticky='w', pady=5)
self.influx_bucket_var = tk.StringVar(value='smartmeter')
ttk.Entry(self.influxdb_frame, textvariable=self.influx_bucket_var, width=30).grid(row=3, column=1, sticky='w', pady=5)
# Info frame
info_frame = ttk.LabelFrame(main_frame, text='Info', padding=10)
info_frame.pack(fill='both', expand=True, padx=20, pady=10)
info_text = """Time format examples:
• 2026-03-01 (start of day)
• 2026-03-10 14:30:00 (specific time)
Manual mode: Select date range to republish
Auto-detect: Find gaps in InfluxDB automatically"""
ttk.Label(info_frame, text=info_text, justify='left').pack(anchor='w')
def create_progress_tab(self, parent):
"""Create progress tab"""
main_frame = ttk.Frame(parent, padding=10)
main_frame.pack(fill='both', expand=True)
# Progress bar
ttk.Label(main_frame, text='Publishing Progress:', font=('TkDefaultFont', 10, 'bold')).pack(anchor='w', pady=5)
self.progress_var = tk.DoubleVar()
self.progress_bar = ttk.Progressbar(main_frame, variable=self.progress_var, maximum=100)
self.progress_bar.pack(fill='x', pady=5)
# Stats frame
stats_frame = ttk.LabelFrame(main_frame, text='Statistics', padding=10)
stats_frame.pack(fill='x', pady=10)
self.stats_text = tk.StringVar(value='Published: 0\nSkipped: 0\nRate: 0 msg/sec')
ttk.Label(stats_frame, textvariable=self.stats_text, font=('TkDefaultFont', 10)).pack(anchor='w')
# Log output
ttk.Label(main_frame, text='Log Output:', font=('TkDefaultFont', 10, 'bold')).pack(anchor='w', pady=(10, 5))
self.log_text = scrolledtext.ScrolledText(main_frame, height=20, width=100, state='disabled')
self.log_text.pack(fill='both', expand=True)
def update_time_mode(self):
"""Update visibility of time selection frames"""
if self.time_mode_var.get() == 'manual':
self.manual_frame.pack(fill='x', padx=20, pady=10)
self.influxdb_frame.pack_forget()
else:
self.manual_frame.pack_forget()
self.influxdb_frame.pack(fill='x', padx=20, pady=10)
def select_csv_file(self):
"""Open file browser for CSV selection"""
filename = filedialog.askopenfilename(
title='Select CSV File',
filetypes=[('CSV files', '*.csv'), ('All files', '*.*')]
)
if filename:
self.csv_file_var.set(filename)
def log(self, message: str):
"""Add message to log"""
self.log_text.config(state='normal')
self.log_text.insert('end', message + '\n')
self.log_text.see('end')
self.log_text.config(state='disabled')
self.root.update()
def test_connection(self):
"""Test MQTT connection"""
broker = self.mqtt_broker_var.get()
port = int(self.mqtt_port_var.get())
user = self.mqtt_user_var.get() or None
password = self.mqtt_pass_var.get() or None
def test_thread():
self.status_label.config(text='Testing...')
self.log('Testing MQTT connection...')
client = mqtt.Client()
if user and password:
client.username_pw_set(user, password)
try:
client.connect(broker, port, keepalive=10)
client.loop_start()
time.sleep(2)
client.loop_stop()
client.disconnect()
self.log('✓ MQTT connection successful!')
self.status_label.config(text='Connection OK')
messagebox.showinfo('Success', 'MQTT connection test passed!')
except Exception as e:
self.log(f'✗ Connection failed: {e}')
self.status_label.config(text='Connection failed')
messagebox.showerror('Error', f'Connection failed:\n{e}')
thread = threading.Thread(target=test_thread, daemon=True)
thread.start()
def parse_time_input(self, date_str: str, time_str: str = '00:00:00') -> int:
"""Parse date/time input and return Unix timestamp"""
try:
dt_str = f"{date_str} {time_str}"
dt = datetime.strptime(dt_str, '%Y-%m-%d %H:%M:%S')
return int(dt.timestamp())
except ValueError as e:
raise ValueError(f'Invalid date/time format: {e}')
def get_time_range(self) -> Tuple[int, int]:
"""Get time range based on selected mode"""
if self.time_mode_var.get() == 'manual':
from_time = self.parse_time_input(self.from_date_var.get(), self.from_time_var.get())
to_time = self.parse_time_input(self.to_date_var.get(), self.to_time_var.get())
return from_time, to_time
else:
# InfluxDB mode
if not HAS_INFLUXDB:
raise RuntimeError('InfluxDB mode requires influxdb-client')
self.log('Connecting to InfluxDB...')
try:
client = InfluxDBClient(
url=self.influx_url_var.get(),
token=self.influx_token_var.get(),
org=self.influx_org_var.get()
)
query_api = client.query_api()
device_id = self.device_id_var.get()
bucket = self.influx_bucket_var.get()
# Query last 7 days
now = int(time.time())
from_time = now - (7 * 24 * 3600)
to_time = now
self.log(f'Searching for missing data from {datetime.fromtimestamp(from_time)} to {datetime.fromtimestamp(to_time)}')
query = f'''
from(bucket: "{bucket}")
|> range(start: {from_time}s, stop: {to_time}s)
|> filter(fn: (r) => r._measurement == "smartmeter" and r.device_id == "{device_id}")
|> keep(columns: ["_time"])
|> sort(columns: ["_time"])
'''
tables = query_api.query(query)
existing_times = []
for table in tables:
for record in table.records:
ts = int(record.values["_time"].timestamp())
existing_times.append(ts)
client.close()
if not existing_times:
self.log('No data in InfluxDB, will republish entire range')
return from_time, to_time
# Find first gap
existing_times = sorted(set(existing_times))
for i, ts in enumerate(existing_times):
if i > 0 and existing_times[i] - existing_times[i-1] > 60: # 60s gap
gap_start = existing_times[i-1]
gap_end = existing_times[i]
self.log(f'Found gap: {datetime.fromtimestamp(gap_start)} to {datetime.fromtimestamp(gap_end)}')
return gap_start, gap_end
self.log('No gaps found in InfluxDB')
return from_time, to_time
except Exception as e:
raise RuntimeError(f'InfluxDB error: {e}')
def republish_csv(self, csv_file: str, device_id: str, from_time: int, to_time: int):
"""Republish CSV data to MQTT"""
if not os.path.isfile(csv_file):
self.log(f'✗ File not found: {csv_file}')
return
count = 0
skipped = 0
start_time = time.time()
try:
with open(csv_file, 'r') as f:
reader = csv.DictReader(f)
if not reader.fieldnames:
self.log('✗ Invalid CSV: no header row')
return
for row in reader:
if not self.publishing:
self.log('Stopped by user')
break
try:
ts_utc = int(row['ts_utc'])
if ts_utc < from_time or ts_utc > to_time:
skipped += 1
continue
# Build payload
short_id = device_id[-4:].upper() if len(device_id) >= 4 else device_id.upper()
data = {'id': short_id, 'ts': ts_utc}
for key in ['e_kwh', 'p_w', 'p1_w', 'p2_w', 'p3_w', 'bat_v', 'bat_pct', 'rssi', 'snr']:
if key in row and row[key].strip():
                                try:
                                    val = float(row[key]) if '.' in row[key] else int(row[key])
                                    data[key] = val
                                except ValueError:
                                    pass
# Publish
topic = f"smartmeter/{device_id}/state"
payload = json.dumps(data)
self.mqtt_client.publish(topic, payload)
count += 1
self.published_count = count
# Update UI
if count % 10 == 0:
elapsed = time.time() - start_time
rate = count / elapsed if elapsed > 0 else 0
self.stats_text.set(f'Published: {count}\nSkipped: {skipped}\nRate: {rate:.1f} msg/sec')
self.log(f'[{count:4d}] {ts_utc} {data.get("p_w", "?")}W')
# Rate limiting
time.sleep(1.0 / int(self.rate_var.get()))
except (ValueError, KeyError):
skipped += 1
continue
except Exception as e:
self.log(f'✗ Error: {e}')
elapsed = time.time() - start_time
self.log(f'✓ Completed! Published {count} samples in {elapsed:.1f}s')
self.published_count = count
def start_publishing(self):
"""Start republishing data"""
if not self.csv_file_var.get():
messagebox.showerror('Error', 'Please select a CSV file')
return
if not self.device_id_var.get():
messagebox.showerror('Error', 'Please enter device ID')
return
try:
port = int(self.mqtt_port_var.get())
except ValueError:
messagebox.showerror('Error', 'Invalid MQTT port')
return
try:
rate = int(self.rate_var.get())
if rate < 1 or rate > 100:
raise ValueError('Rate must be 1-100')
except ValueError:
messagebox.showerror('Error', 'Invalid publish rate')
return
self.publishing = True
self.log_text.config(state='normal')
self.log_text.delete('1.0', 'end')
self.log_text.config(state='disabled')
self.published_count = 0
def pub_thread():
try:
# Get time range
from_time, to_time = self.get_time_range()
self.log(f'Time range: {datetime.fromtimestamp(from_time)} to {datetime.fromtimestamp(to_time)}')
# Connect to MQTT
self.status_label.config(text='Connecting to MQTT...')
broker = self.mqtt_broker_var.get()
port = int(self.mqtt_port_var.get())
user = self.mqtt_user_var.get() or None
password = self.mqtt_pass_var.get() or None
self.mqtt_client = mqtt.Client()
if user and password:
self.mqtt_client.username_pw_set(user, password)
self.mqtt_client.connect(broker, port, keepalive=60)
self.mqtt_client.loop_start()
time.sleep(1)
self.log('✓ Connected to MQTT broker')
self.status_label.config(text='Publishing...')
# Republish
self.republish_csv(self.csv_file_var.get(), self.device_id_var.get(),
from_time, to_time)
self.mqtt_client.loop_stop()
self.mqtt_client.disconnect()
self.status_label.config(text='Done')
messagebox.showinfo('Success', f'Published {self.published_count} samples')
except Exception as e:
self.log(f'✗ Error: {e}')
self.status_label.config(text='Error')
messagebox.showerror('Error', str(e))
finally:
self.publishing = False
thread = threading.Thread(target=pub_thread, daemon=True)
thread.start()
def stop_publishing(self):
"""Stop publishing"""
self.publishing = False
self.status_label.config(text='Stopping...')
def main():
root = tk.Tk()
app = MQTTRepublisherGUI(root)
root.mainloop()
if __name__ == '__main__':
main()
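For reference, what `republish_csv` above emits per CSV row can be factored into a pure function. This sketch (function name and example row are illustrative; topic and payload shapes follow the code above):

```python
import json

def build_publish(device_id: str, row: dict):
    # Topic is keyed by the full device id; the payload "id" field uses
    # the 4-character short id, matching republish_csv above.
    short_id = device_id[-4:].upper() if len(device_id) >= 4 else device_id.upper()
    data = {"id": short_id, "ts": int(row["ts_utc"])}
    for key in ("e_kwh", "p_w", "p1_w", "p2_w", "p3_w", "bat_v", "bat_pct", "rssi", "snr"):
        raw = row.get(key, "").strip()
        if raw:
            try:
                data[key] = float(raw) if "." in raw else int(raw)
            except ValueError:
                pass  # skip malformed numeric fields, as the publisher does
    return f"smartmeter/{device_id}/state", json.dumps(data)
```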

View File

@@ -1,2 +0,0 @@
paho-mqtt>=1.6.1
influxdb-client>=1.18.0

View File

@@ -1,35 +0,0 @@
#pragma once
#include <Arduino.h>
#include "config.h"
#include "data_model.h"
#include "wifi_manager.h"
struct ReceiverSharedState {
SenderStatus sender_statuses[NUM_SENDERS];
FaultCounters sender_faults_remote[NUM_SENDERS];
FaultCounters sender_faults_remote_published[NUM_SENDERS];
FaultType sender_last_error_remote[NUM_SENDERS];
FaultType sender_last_error_remote_published[NUM_SENDERS];
uint32_t sender_last_error_remote_utc[NUM_SENDERS];
uint32_t sender_last_error_remote_ms[NUM_SENDERS];
bool sender_discovery_sent[NUM_SENDERS];
uint16_t last_batch_id_rx[NUM_SENDERS];
FaultCounters receiver_faults;
FaultCounters receiver_faults_published;
FaultType receiver_last_error;
FaultType receiver_last_error_published;
uint32_t receiver_last_error_utc;
uint32_t receiver_last_error_ms;
bool receiver_discovery_sent;
bool ap_mode;
// WiFi configuration and reconnection tracking
WifiMqttConfig wifi_config;
uint32_t last_wifi_reconnect_attempt_ms;
char ap_ssid[32]; // AP SSID for restoring AP mode if reconnection fails
char ap_password[32]; // AP password for restoring AP mode
};

View File

@@ -21,8 +21,6 @@ const char *rx_reject_reason_text(RxRejectReason reason) {
     return "device_id_mismatch";
   case RxRejectReason::BatchIdMismatch:
     return "batch_id_mismatch";
-  case RxRejectReason::UnknownSender:
-    return "unknown_sender";
   default:
     return "none";
   }

View File

@@ -350,35 +350,37 @@ static void render_receiver_sender(uint8_t index) {
 #endif
   display.setCursor(0, 12);
+  if (status.last_data.energy_multi) {
+    display.printf("E1 %lu E2 %lu", static_cast<unsigned long>(status.last_data.energy_kwh_int[0]),
+                   static_cast<unsigned long>(status.last_data.energy_kwh_int[1]));
+  } else {
   display.printf("E %.2f kWh", status.last_data.energy_total_kwh);
+  }
   display.setCursor(0, 22);
+  if (status.last_data.energy_multi && status.last_data.energy_meter_count >= 3) {
+    display.printf("E3 %lu", static_cast<unsigned long>(status.last_data.energy_kwh_int[2]));
+  } else {
   display.printf("L1 %dW", static_cast<int>(round_power_w(status.last_data.phase_power_w[0])));
+  }
   display.setCursor(0, 32);
   display.printf("L2 %dW", static_cast<int>(round_power_w(status.last_data.phase_power_w[1])));
   display.setCursor(0, 42);
-  display.printf("L3 %dW P%dW",
-                 static_cast<int>(round_power_w(status.last_data.phase_power_w[2])),
-                 static_cast<int>(round_power_w(status.last_data.total_power_w)));
+  display.printf("L3 %dW", static_cast<int>(round_power_w(status.last_data.phase_power_w[2])));
   display.setCursor(0, 52);
-  uint32_t total_batches = status.rx_batches_total;
-  uint32_t duplicate_batches = status.rx_batches_duplicate;
-  float duplicate_pct = 0.0f;
-  if (total_batches > 0) {
-    duplicate_pct = (static_cast<float>(duplicate_batches) * 100.0f) / static_cast<float>(total_batches);
-  }
-  char dup_time[6];
-  strncpy(dup_time, "--:--", sizeof(dup_time));
-  dup_time[sizeof(dup_time) - 1] = '\0';
-  if (status.rx_last_duplicate_ts_utc > 0 && time_is_synced()) {
-    time_t t = static_cast<time_t>(status.rx_last_duplicate_ts_utc);
-    struct tm timeinfo;
-    localtime_r(&t, &timeinfo);
-    snprintf(dup_time, sizeof(dup_time), "%02d:%02d", timeinfo.tm_hour, timeinfo.tm_min);
-  }
-  display.printf("Dup %.1f%%(%lu) %s",
-                 static_cast<double>(duplicate_pct),
-                 static_cast<unsigned long>(duplicate_batches),
-                 dup_time);
+  display.print("P");
+  char p_buf[16];
+  snprintf(p_buf, sizeof(p_buf), "%dW", static_cast<int>(round_power_w(status.last_data.total_power_w)));
+  int16_t x1 = 0;
+  int16_t y1 = 0;
+  uint16_t w = 0;
+  uint16_t h = 0;
+  display.getTextBounds(p_buf, 0, 0, &x1, &y1, &w, &h);
+  int16_t x = static_cast<int16_t>(display.width() - w);
+  if (x < 0) {
+    x = 0;
+  }
+  display.setCursor(x, 52);
+  display.print(p_buf);
   display.display();
 }

View File

@@ -3,8 +3,6 @@
 #include <limits.h>
 #include <math.h>
-
-static constexpr size_t STATE_JSON_DOC_CAPACITY = 512;
 static float round2(float value) {
   if (isnan(value)) {
     return value;
@@ -60,16 +58,24 @@ static void set_int_or_null(JsonDocument &doc, const char *key, float value) {
 }
 bool meterDataToJson(const MeterData &data, String &out_json) {
-  StaticJsonDocument<STATE_JSON_DOC_CAPACITY> doc;
+  StaticJsonDocument<320> doc;
   doc["id"] = short_id_from_device_id(data.device_id);
   doc["ts"] = data.ts_utc;
   char buf[16];
+  if (data.energy_multi) {
+    doc["energy1_kwh"] = data.energy_kwh_int[0];
+    doc["energy2_kwh"] = data.energy_kwh_int[1];
+    if (data.energy_meter_count >= 3) {
+      doc["energy3_kwh"] = data.energy_kwh_int[2];
+    }
+  } else {
   format_float_2(buf, sizeof(buf), data.energy_total_kwh);
   doc["e_kwh"] = serialized(buf);
   set_int_or_null(doc, "p_w", data.total_power_w);
   set_int_or_null(doc, "p1_w", data.phase_power_w[0]);
   set_int_or_null(doc, "p2_w", data.phase_power_w[1]);
   set_int_or_null(doc, "p3_w", data.phase_power_w[2]);
+  }
   format_float_2(buf, sizeof(buf), data.battery_voltage_v);
   doc["bat_v"] = serialized(buf);
   doc["bat_pct"] = data.battery_percent;
@@ -92,5 +98,5 @@ bool meterDataToJson(const MeterData &data, String &out_json) {
   out_json = "";
   size_t len = serializeJson(doc, out_json);
-  return len > 0;
+  return len > 0 && len < 320;
 }
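The schema split in the `meterDataToJson` diff above can be mirrored in Python for receiver-side tooling; this sketch (function name illustrative, keys taken from the diff) shows which fields appear in multi-meter versus single-meter payloads:

```python
import json

def meter_state_json(data: dict) -> str:
    # Mirrors meterDataToJson: multi-meter frames carry integer per-meter
    # energy counters; single-meter frames carry float total energy plus
    # total and per-phase power.
    doc = {"id": data["id"], "ts": data["ts"]}
    if data.get("energy_multi"):
        doc["energy1_kwh"] = data["energy_kwh_int"][0]
        doc["energy2_kwh"] = data["energy_kwh_int"][1]
        if data["energy_meter_count"] >= 3:
            doc["energy3_kwh"] = data["energy_kwh_int"][2]
    else:
        doc["e_kwh"] = round(data["energy_total_kwh"], 2)
        doc["p_w"] = int(round(data["total_power_w"]))
        for i, key in enumerate(("p1_w", "p2_w", "p3_w")):
            doc[key] = int(round(data["phase_power_w"][i]))
    doc["bat_v"] = round(data["battery_voltage_v"], 2)
    doc["bat_pct"] = data["battery_percent"]
    return json.dumps(doc)
```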

View File

@@ -1,5 +1,4 @@
#include "lora_transport.h" #include "lora_transport.h"
#include "lora_frame_logic.h"
#include <LoRa.h> #include <LoRa.h>
#include <SPI.h> #include <SPI.h>
#include <math.h> #include <math.h>
@@ -36,6 +35,21 @@ bool lora_get_last_rx_signal(int16_t &rssi_dbm, float &snr_db) {
return true; return true;
} }
static uint16_t crc16_ccitt(const uint8_t *data, size_t len) {
uint16_t crc = 0xFFFF;
for (size_t i = 0; i < len; ++i) {
crc ^= static_cast<uint16_t>(data[i]) << 8;
for (uint8_t b = 0; b < 8; ++b) {
if (crc & 0x8000) {
crc = (crc << 1) ^ 0x1021;
} else {
crc <<= 1;
}
}
}
return crc;
}
void lora_init() { void lora_init() {
SPI.begin(PIN_LORA_SCK, PIN_LORA_MISO, PIN_LORA_MOSI, PIN_LORA_NSS); SPI.begin(PIN_LORA_SCK, PIN_LORA_MISO, PIN_LORA_MOSI, PIN_LORA_NSS);
LoRa.setPins(PIN_LORA_NSS, PIN_LORA_RST, PIN_LORA_DIO0); LoRa.setPins(PIN_LORA_NSS, PIN_LORA_RST, PIN_LORA_DIO0);
@@ -52,35 +66,54 @@ bool lora_send(const LoraPacket &pkt) {
     return true;
   }
   uint32_t t0 = 0;
+  uint32_t t1 = 0;
+  uint32_t t2 = 0;
+  uint32_t t3 = 0;
+  uint32_t t4 = 0;
   if (SERIAL_DEBUG_MODE) {
     t0 = millis();
   }
   LoRa.idle();
+  if (SERIAL_DEBUG_MODE) {
+    t1 = millis();
+  }
+  uint8_t buffer[1 + 2 + LORA_MAX_PAYLOAD + 2];
+  size_t idx = 0;
+  buffer[idx++] = static_cast<uint8_t>(pkt.msg_kind);
+  buffer[idx++] = static_cast<uint8_t>(pkt.device_id_short >> 8);
+  buffer[idx++] = static_cast<uint8_t>(pkt.device_id_short & 0xFF);
   if (pkt.payload_len > LORA_MAX_PAYLOAD) {
     return false;
   }
-  uint8_t buffer[1 + 2 + LORA_MAX_PAYLOAD + 2];
-  size_t frame_len = 0;
-  if (!lora_build_frame(static_cast<uint8_t>(pkt.msg_kind), pkt.device_id_short, pkt.payload, pkt.payload_len,
-                        buffer, sizeof(buffer), frame_len)) {
-    return false;
-  }
+  memcpy(&buffer[idx], pkt.payload, pkt.payload_len);
+  idx += pkt.payload_len;
+  uint16_t crc = crc16_ccitt(buffer, idx);
+  buffer[idx++] = static_cast<uint8_t>(crc >> 8);
+  buffer[idx++] = static_cast<uint8_t>(crc & 0xFF);
   LoRa.beginPacket();
-  LoRa.write(buffer, frame_len);
-  int result = LoRa.endPacket(false);
-  bool ok = result == 1;
   if (SERIAL_DEBUG_MODE) {
-    uint32_t tx_ms = millis() - t0;
-    if (!ok || tx_ms > 2000) {
-      Serial.printf("lora_tx: len=%u total=%lums ok=%u\n",
-                    static_cast<unsigned>(frame_len),
-                    static_cast<unsigned long>(tx_ms),
-                    ok ? 1U : 0U);
-    }
+    t2 = millis();
   }
-  return ok;
+  LoRa.write(buffer, idx);
+  if (SERIAL_DEBUG_MODE) {
+    t3 = millis();
+  }
+  int result = LoRa.endPacket(false);
+  if (SERIAL_DEBUG_MODE) {
+    t4 = millis();
+    Serial.printf("lora_tx: idle=%lums begin=%lums write=%lums end=%lums total=%lums len=%u\n",
+                  static_cast<unsigned long>(t1 - t0),
+                  static_cast<unsigned long>(t2 - t1),
+                  static_cast<unsigned long>(t3 - t2),
+                  static_cast<unsigned long>(t4 - t3),
+                  static_cast<unsigned long>(t4 - t0),
+                  static_cast<unsigned>(idx));
+  }
+  return result == 1;
 }
 bool lora_receive(LoraPacket &pkt, uint32_t timeout_ms) {
@@ -121,33 +154,26 @@ bool lora_receive(LoraPacket &pkt, uint32_t timeout_ms) {
     return false;
   }
-  uint8_t msg_kind = 0;
-  uint16_t device_id_short = 0;
-  size_t payload_len = 0;
-  LoraFrameDecodeStatus status = lora_parse_frame(
-      buffer, len, static_cast<uint8_t>(LoraMsgKind::AckDown), &msg_kind, &device_id_short,
-      pkt.payload, sizeof(pkt.payload), &payload_len);
-  if (status == LoraFrameDecodeStatus::CrcFail) {
+  uint16_t crc_calc = crc16_ccitt(buffer, len - 2);
+  uint16_t crc_rx = static_cast<uint16_t>(buffer[len - 2] << 8) | buffer[len - 1];
+  if (crc_calc != crc_rx) {
     note_reject(RxRejectReason::CrcFail);
     return false;
   }
-  if (status == LoraFrameDecodeStatus::InvalidMsgKind) {
+  uint8_t msg_kind = buffer[0];
+  if (msg_kind > static_cast<uint8_t>(LoraMsgKind::AckDown)) {
     note_reject(RxRejectReason::InvalidMsgKind);
     return false;
   }
-  if (status == LoraFrameDecodeStatus::LengthMismatch) {
-    note_reject(RxRejectReason::LengthMismatch);
-    return false;
-  }
   pkt.msg_kind = static_cast<LoraMsgKind>(msg_kind);
-  pkt.device_id_short = device_id_short;
-  pkt.payload_len = payload_len;
+  pkt.device_id_short = static_cast<uint16_t>(buffer[1] << 8) | buffer[2];
+  pkt.payload_len = len - 5;
   if (pkt.payload_len > LORA_MAX_PAYLOAD) {
     note_reject(RxRejectReason::LengthMismatch);
     return false;
   }
+  memcpy(pkt.payload, &buffer[3], pkt.payload_len);
   pkt.rssi_dbm = g_last_rx_rssi_dbm;
   pkt.snr_db = g_last_rx_snr_db;
   return true;
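The new wire format is small enough to round-trip on the host: a 3-byte header (message kind, then the 16-bit short device ID big-endian), the payload, and a trailing big-endian CRC-16 over everything before it. A sketch of the framing (plain C++, using `std::vector` in place of the firmware's fixed buffer):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// CRC as in the firmware path: CRC-16/CCITT-FALSE.
static uint16_t crc16_ccitt(const uint8_t *d, size_t n) {
  uint16_t crc = 0xFFFF;
  for (size_t i = 0; i < n; ++i) {
    crc ^= static_cast<uint16_t>(d[i]) << 8;
    for (int b = 0; b < 8; ++b)
      crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ 0x1021)
                           : static_cast<uint16_t>(crc << 1);
  }
  return crc;
}

// Wire layout used by lora_send/lora_receive:
//   [0]      msg_kind
//   [1..2]   device_id_short, big-endian
//   [3..]    payload (payload_len = frame_len - 5)
//   [-2..-1] CRC16 over all preceding bytes, big-endian
static std::vector<uint8_t> build_frame(uint8_t kind, uint16_t id,
                                        const uint8_t *payload, size_t n) {
  std::vector<uint8_t> f;
  f.push_back(kind);
  f.push_back(static_cast<uint8_t>(id >> 8));
  f.push_back(static_cast<uint8_t>(id & 0xFF));
  f.insert(f.end(), payload, payload + n);
  uint16_t crc = crc16_ccitt(f.data(), f.size());
  f.push_back(static_cast<uint8_t>(crc >> 8));
  f.push_back(static_cast<uint8_t>(crc & 0xFF));
  return f;
}

static bool parse_frame(const std::vector<uint8_t> &f, uint8_t &kind,
                        uint16_t &id, std::vector<uint8_t> &payload) {
  if (f.size() < 5) return false;  // 3-byte header + 2-byte CRC
  uint16_t crc_rx = static_cast<uint16_t>(f[f.size() - 2] << 8) | f[f.size() - 1];
  if (crc16_ccitt(f.data(), f.size() - 2) != crc_rx) return false;
  kind = f[0];
  id = static_cast<uint16_t>(f[1] << 8) | f[2];
  payload.assign(f.begin() + 3, f.end() - 2);
  return true;
}

// Round-trip check, including a CRC reject on a corrupted frame.
static bool roundtrip_ok() {
  const uint8_t payload[3] = {1, 2, 3};
  std::vector<uint8_t> f = build_frame(0x01, 0xA1B2, payload, 3);
  uint8_t kind = 0; uint16_t id = 0; std::vector<uint8_t> out;
  if (!parse_frame(f, kind, id, out)) return false;
  std::vector<uint8_t> bad = f;
  bad[0] ^= 0xFF;  // flip a header bit -> CRC must reject
  uint8_t k2; uint16_t i2; std::vector<uint8_t> o2;
  return kind == 0x01 && id == 0xA1B2 && out.size() == 3 && !parse_frame(bad, k2, i2, o2);
}
```

Note `pkt.payload_len = len - 5` in the receiver matches the 3-byte header plus 2-byte CRC overhead.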

File diff suppressed because it is too large


@@ -4,9 +4,7 @@
 #include <stdlib.h>
 #include <string.h>
-// Dedicated reader task pumps UART continuously; keep timeout short so parser can
-// recover quickly from broken frames.
-static constexpr uint32_t METER_FRAME_TIMEOUT_MS = METER_FRAME_TIMEOUT_CFG_MS;
+static constexpr uint32_t METER_FRAME_TIMEOUT_MS = 1500;
 static constexpr size_t METER_FRAME_MAX = 512;
 enum class MeterRxState : uint8_t {
@@ -14,227 +12,87 @@ enum class MeterRxState : uint8_t {
   InFrame = 1
 };
-static MeterRxState g_rx_state = MeterRxState::WaitStart;
-static char g_frame_buf[METER_FRAME_MAX + 1];
-static size_t g_frame_len = 0;
-static uint32_t g_last_rx_ms = 0;
-static uint32_t g_bytes_rx = 0;
-static uint32_t g_frames_ok = 0;
-static uint32_t g_frames_parse_fail = 0;
-static uint32_t g_rx_overflow = 0;
-static uint32_t g_rx_timeout = 0;
-static uint32_t g_last_log_ms = 0;
-static uint32_t g_last_good_frame_ms = 0;
-static constexpr uint32_t METER_FIXED_FRAC_MAX_DIV = 10000;
-void meter_init() {
-#ifdef ARDUINO_ARCH_ESP32
-  // Buffer enough serial data to survive long LoRa blocking sections.
-  Serial2.setRxBufferSize(8192);
-#endif
-  Serial2.begin(9600, SERIAL_7E1, PIN_METER_RX, -1);
-}
-enum class ObisField : uint8_t {
-  None = 0,
-  Energy = 1,
-  TotalPower = 2,
-  Phase1 = 3,
-  Phase2 = 4,
-  Phase3 = 5,
-  MeterSeconds = 6
-};
-static ObisField detect_obis_field(const char *line) {
-  if (!line) {
-    return ObisField::None;
-  }
-  const char *p = line;
-  while (*p == ' ' || *p == '\t') {
-    ++p;
-  }
-  if (strncmp(p, "1-0:1.8.0", 9) == 0) {
-    return ObisField::Energy;
-  }
-  if (strncmp(p, "1-0:16.7.0", 10) == 0) {
-    return ObisField::TotalPower;
-  }
-  if (strncmp(p, "1-0:36.7.0", 10) == 0) {
-    return ObisField::Phase1;
-  }
-  if (strncmp(p, "1-0:56.7.0", 10) == 0) {
-    return ObisField::Phase2;
-  }
-  if (strncmp(p, "1-0:76.7.0", 10) == 0) {
-    return ObisField::Phase3;
-  }
-  if (strncmp(p, "0-0:96.8.0*255", 14) == 0) {
-    return ObisField::MeterSeconds;
-  }
-  return ObisField::None;
-}
+struct MeterPort {
+  HardwareSerial *serial;
+  MeterRxState state;
+  char frame_buf[METER_FRAME_MAX + 1];
+  size_t frame_len;
+  uint32_t last_rx_ms;
+  uint32_t bytes_rx;
+  uint32_t frames_ok;
+  uint32_t frames_parse_fail;
+  uint32_t rx_overflow;
+  uint32_t rx_timeout;
+  uint32_t last_energy_kwh;
+  bool has_energy;
+};
+static MeterPort g_ports[METER_COUNT] = {};
+static uint32_t g_last_log_ms = 0;
-static bool parse_decimal_fixed(const char *start, const char *end, float &out_value) {
-  if (!start || !end || end <= start) {
-    return false;
-  }
-  const char *cur = start;
-  bool started = false;
-  bool negative = false;
-  bool in_fraction = false;
-  bool saw_digit = false;
-  uint64_t int_part = 0;
-  uint32_t frac_part = 0;
-  uint32_t frac_div = 1;
-  while (cur < end) {
-    char c = *cur++;
-    if (!started) {
-      if (c == '+' || c == '-') {
-        started = true;
-        negative = (c == '-');
-        continue;
-      }
-      if (c >= '0' && c <= '9') {
-        started = true;
-        saw_digit = true;
-        int_part = static_cast<uint64_t>(c - '0');
-        continue;
-      }
-      if (c == '.' || c == ',') {
-        started = true;
-        in_fraction = true;
-        continue;
-      }
-      continue;
-    }
-    if (c >= '0' && c <= '9') {
-      saw_digit = true;
-      uint32_t digit = static_cast<uint32_t>(c - '0');
-      if (!in_fraction) {
-        if (int_part <= (UINT64_MAX - digit) / 10ULL) {
-          int_part = int_part * 10ULL + digit;
-        }
-      } else if (frac_div < METER_FIXED_FRAC_MAX_DIV) {
-        frac_part = frac_part * 10U + digit;
-        frac_div *= 10U;
-      }
-      continue;
-    }
-    if ((c == '.' || c == ',') && !in_fraction) {
-      in_fraction = true;
-      continue;
-    }
-    break;
-  }
-  if (!saw_digit) {
-    return false;
-  }
-  double value = static_cast<double>(int_part);
-  if (frac_div > 1U) {
-    value += static_cast<double>(frac_part) / static_cast<double>(frac_div);
-  }
-  if (negative) {
-    value = -value;
-  }
-  out_value = static_cast<float>(value);
-  return true;
-}
-static bool parse_obis_ascii_payload_value(const char *line, float &out_value) {
-  const char *lparen = strchr(line, '(');
-  if (!lparen) {
-    return false;
-  }
-  const char *end = lparen + 1;
-  while (*end && *end != ')' && *end != '*') {
-    ++end;
-  }
-  if (end <= lparen + 1) {
-    return false;
-  }
-  return parse_decimal_fixed(lparen + 1, end, out_value);
-}
-static bool parse_obis_ascii_unit_scale(const char *line, float &value) {
-  const char *lparen = strchr(line, '(');
-  if (!lparen) {
-    return false;
-  }
-  const char *asterisk = strchr(lparen, '*');
-  if (!asterisk) {
-    return false;
-  }
-  const char *end = strchr(asterisk, ')');
-  if (!end) {
-    return false;
-  }
-  char unit_buf[8];
-  size_t ulen = 0;
-  for (const char *c = asterisk + 1; c < end && ulen + 1 < sizeof(unit_buf); ++c) {
-    if (*c == ' ') {
-      continue;
-    }
-    unit_buf[ulen++] = *c;
-  }
-  unit_buf[ulen] = '\0';
-  if (ulen == 0) {
-    return false;
-  }
-  if (strcmp(unit_buf, "Wh") == 0) {
-    value *= 0.001f;
-    return true;
-  }
-  return false;
-}
-static int8_t hex_nibble(char c) {
-  if (c >= '0' && c <= '9') {
-    return static_cast<int8_t>(c - '0');
-  }
-  if (c >= 'A' && c <= 'F') {
-    return static_cast<int8_t>(10 + (c - 'A'));
-  }
-  if (c >= 'a' && c <= 'f') {
-    return static_cast<int8_t>(10 + (c - 'a'));
-  }
-  return -1;
-}
-static bool parse_obis_hex_payload_u32(const char *line, uint32_t &out_value) {
-  const char *lparen = strchr(line, '(');
-  if (!lparen) {
-    return false;
-  }
-  const char *cur = lparen + 1;
-  uint32_t value = 0;
-  size_t n = 0;
-  while (*cur && *cur != ')' && *cur != '*') {
-    int8_t nib = hex_nibble(*cur++);
-    if (nib < 0) {
-      if (n == 0) {
-        continue;
-      }
-      break;
-    }
-    if (n >= 8) {
-      return false;
-    }
-    value = (value << 4) | static_cast<uint32_t>(nib);
-    n++;
-  }
-  if (n == 0) {
-    return false;
-  }
-  out_value = value;
-  return true;
-}
+static bool parse_obis_ascii_value(const char *line, const char *obis, float &out_value) {
+  const char *p = strstr(line, obis);
+  if (!p) {
+    return false;
+  }
+  const char *lparen = strchr(p, '(');
+  if (!lparen) {
+    return false;
+  }
+  const char *cur = lparen + 1;
+  char num_buf[24];
+  size_t n = 0;
+  while (*cur && *cur != ')' && *cur != '*') {
+    char c = *cur++;
+    if ((c >= '0' && c <= '9') || c == '-' || c == '+' || c == '.' || c == ',') {
+      if (c == ',') {
+        c = '.';
+      }
+      if (n + 1 < sizeof(num_buf)) {
+        num_buf[n++] = c;
+      }
+    } else if (n == 0) {
+      continue;
+    } else {
+      break;
+    }
+  }
+  if (n == 0) {
+    return false;
+  }
+  num_buf[n] = '\0';
+  out_value = static_cast<float>(atof(num_buf));
+  return true;
+}
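The new value parser locates the OBIS code with `strstr`, then copies the digits between `(` and `)`/`*` into a small buffer for `atof`, normalizing a decimal comma. A host-side sketch of the same approach (function names here are illustrative, not the firmware's):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <cstring>

// Illustrative helper mirroring the strstr + '(' approach above: find the
// OBIS code in the line, then read the number between '(' and ')' or '*'.
static bool parse_obis_value(const char *line, const char *obis, float &out) {
  const char *p = strstr(line, obis);
  if (!p) return false;
  const char *lparen = strchr(p, '(');
  if (!lparen) return false;
  char buf[24];
  size_t n = 0;
  for (const char *c = lparen + 1; *c && *c != ')' && *c != '*'; ++c) {
    char ch = (*c == ',') ? '.' : *c;  // some meters use a decimal comma
    if ((ch >= '0' && ch <= '9') || ch == '+' || ch == '-' || ch == '.') {
      if (n + 1 < sizeof(buf)) buf[n++] = ch;
    } else if (n > 0) {
      break;  // number ended
    }
  }
  if (n == 0) return false;
  buf[n] = '\0';
  out = static_cast<float>(atof(buf));
  return true;
}

// Convenience wrapper for testing: value on success, NaN on failure.
static float obis_or_nan(const char *line, const char *obis) {
  float v = 0.0f;
  return parse_obis_value(line, obis, v) ? v : NAN;
}
```

So a telegram line such as `1-0:1.8.0*255(012345.6789*kWh)` yields roughly 12345.679, while a line carrying a different OBIS code is ignored.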
+static bool parse_energy_kwh_floor(const char *frame, size_t len, uint32_t &out_kwh) {
+  char line[128];
+  size_t line_len = 0;
+  for (size_t i = 0; i < len; ++i) {
+    char c = frame[i];
+    if (c == '\r') {
+      continue;
+    }
+    if (c == '\n' || c == '!') {
+      line[line_len] = '\0';
+      float value = NAN;
+      if (parse_obis_ascii_value(line, "1-0:1.8.0", value) && !isnan(value) && value >= 0.0f) {
+        out_kwh = static_cast<uint32_t>(floorf(value));
+        return true;
+      }
+      line_len = 0;
+      if (c == '!') {
+        break;
+      }
+      continue;
+    }
+    if (line_len + 1 < sizeof(line)) {
+      line[line_len++] = c;
+    }
+  }
+  return false;
+}
 static void meter_debug_log() {
   if (!SERIAL_DEBUG_MODE) {
     return;
@@ -244,213 +102,105 @@ static void meter_debug_log() {
     return;
   }
   g_last_log_ms = now_ms;
-  Serial.printf("meter: ok=%lu parse_fail=%lu overflow=%lu timeout=%lu bytes=%lu\n",
-                static_cast<unsigned long>(g_frames_ok),
-                static_cast<unsigned long>(g_frames_parse_fail),
-                static_cast<unsigned long>(g_rx_overflow),
-                static_cast<unsigned long>(g_rx_timeout),
-                static_cast<unsigned long>(g_bytes_rx));
+  for (uint8_t i = 0; i < METER_COUNT; ++i) {
+    const MeterPort &p = g_ports[i];
+    Serial.printf("meter%u: ok=%lu parse_fail=%lu overflow=%lu timeout=%lu bytes=%lu e=%lu valid=%u\n",
+                  static_cast<unsigned>(i + 1),
+                  static_cast<unsigned long>(p.frames_ok),
+                  static_cast<unsigned long>(p.frames_parse_fail),
+                  static_cast<unsigned long>(p.rx_overflow),
+                  static_cast<unsigned long>(p.rx_timeout),
+                  static_cast<unsigned long>(p.bytes_rx),
+                  static_cast<unsigned long>(p.last_energy_kwh),
+                  p.has_energy ? 1 : 0);
+  }
 }
-void meter_get_stats(MeterDriverStats &out) {
-  out.frames_ok = g_frames_ok;
-  out.frames_parse_fail = g_frames_parse_fail;
-  out.rx_overflow = g_rx_overflow;
-  out.rx_timeout = g_rx_timeout;
-  out.bytes_rx = g_bytes_rx;
-  out.last_rx_ms = g_last_rx_ms;
-  out.last_good_frame_ms = g_last_good_frame_ms;
-}
+void meter_init() {
+  g_ports[0].serial = &Serial2;
+  g_ports[0].serial->begin(9600, SERIAL_7E1, PIN_METER1_RX, -1);
+  g_ports[0].state = MeterRxState::WaitStart;
+  if (METER_COUNT >= 2) {
+    g_ports[1].serial = &Serial1;
+    g_ports[1].serial->begin(9600, SERIAL_7E1, PIN_METER2_RX, -1);
+    g_ports[1].state = MeterRxState::WaitStart;
+  }
+  if (METER_COUNT >= 3) {
+    g_ports[2].serial = &Serial;
+    g_ports[2].serial->begin(9600, SERIAL_7E1, PIN_METER3_RX, -1);
+    g_ports[2].state = MeterRxState::WaitStart;
+  }
+}
-bool meter_poll_frame(const char *&frame, size_t &len) {
-  frame = nullptr;
-  len = 0;
+static void meter_poll_port(MeterPort &port) {
+  if (!port.serial) {
+    return;
+  }
   uint32_t now_ms = millis();
-  if (g_rx_state == MeterRxState::InFrame && (now_ms - g_last_rx_ms > METER_FRAME_TIMEOUT_MS)) {
-    g_rx_timeout++;
-    g_rx_state = MeterRxState::WaitStart;
-    g_frame_len = 0;
+  if (port.state == MeterRxState::InFrame && (now_ms - port.last_rx_ms > METER_FRAME_TIMEOUT_MS)) {
+    port.rx_timeout++;
+    port.state = MeterRxState::WaitStart;
+    port.frame_len = 0;
   }
-  while (Serial2.available()) {
-    char c = static_cast<char>(Serial2.read());
-    g_bytes_rx++;
-    g_last_rx_ms = now_ms;
-    if (g_rx_state == MeterRxState::WaitStart) {
+  while (port.serial->available()) {
+    char c = static_cast<char>(port.serial->read());
+    port.bytes_rx++;
+    port.last_rx_ms = now_ms;
+    if (port.state == MeterRxState::WaitStart) {
       if (c == '/') {
-        g_rx_state = MeterRxState::InFrame;
-        g_frame_len = 0;
-        g_frame_buf[g_frame_len++] = c;
+        port.state = MeterRxState::InFrame;
+        port.frame_len = 0;
+        port.frame_buf[port.frame_len++] = c;
       }
       continue;
     }
-    // Fast resync if a new telegram starts before current frame completed.
-    if (c == '/') {
-      g_frame_len = 0;
-      g_frame_buf[g_frame_len++] = c;
-      continue;
-    }
-    if (g_frame_len + 1 >= sizeof(g_frame_buf)) {
-      g_rx_overflow++;
-      g_rx_state = MeterRxState::WaitStart;
-      g_frame_len = 0;
-      continue;
-    }
-    g_frame_buf[g_frame_len++] = c;
+    if (port.frame_len + 1 >= sizeof(port.frame_buf)) {
+      port.rx_overflow++;
+      port.state = MeterRxState::WaitStart;
+      port.frame_len = 0;
+      continue;
+    }
+    port.frame_buf[port.frame_len++] = c;
     if (c == '!') {
-      g_frame_buf[g_frame_len] = '\0';
-      frame = g_frame_buf;
-      len = g_frame_len;
-      g_rx_state = MeterRxState::WaitStart;
-      g_frame_len = 0;
-      return true;
-    }
-  }
-  meter_debug_log();
-  return false;
-}
+      port.frame_buf[port.frame_len] = '\0';
+      uint32_t energy_kwh = 0;
+      if (parse_energy_kwh_floor(port.frame_buf, port.frame_len, energy_kwh)) {
+        port.last_energy_kwh = energy_kwh;
+        port.has_energy = true;
+        port.frames_ok++;
+      } else {
+        port.frames_parse_fail++;
+      }
+      port.state = MeterRxState::WaitStart;
+      port.frame_len = 0;
+    }
+  }
+}
+void meter_poll() {
+  for (uint8_t i = 0; i < METER_COUNT; ++i) {
+    meter_poll_port(g_ports[i]);
+  }
+  meter_debug_log();
+}
+uint8_t meter_count() {
+  return METER_COUNT;
+}
+bool meter_get_last_energy_kwh(uint8_t meter_idx, uint32_t &out_energy_kwh) {
+  if (meter_idx >= METER_COUNT) {
+    return false;
+  }
+  if (!g_ports[meter_idx].has_energy) {
+    return false;
+  }
+  out_energy_kwh = g_ports[meter_idx].last_energy_kwh;
+  return true;
+}
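The per-port receive loop is a two-state machine over the IEC 62056-21-style telegram delimiters: bytes are dropped until `/` opens a frame, and `!` closes it. A minimal host-side sketch of that framing (illustrative names, no Arduino dependencies):

```cpp
#include <cassert>
#include <string>

// Minimal sketch of the '/'-to-'!' framing used by meter_poll_port:
// bytes outside a frame are dropped; '!' terminates a telegram.
struct FrameAssembler {
  std::string buf;
  bool in_frame = false;

  // Returns true when `out` holds a complete telegram.
  bool feed(char c, std::string &out) {
    if (!in_frame) {
      if (c == '/') {
        in_frame = true;
        buf = "/";
      }
      return false;  // noise before the start delimiter is discarded
    }
    buf += c;
    if (c == '!') {
      out = buf;
      in_frame = false;
      buf.clear();
      return true;
    }
    return false;
  }
};

// Feed a whole byte stream and return the last complete telegram (if any).
static std::string last_frame(const char *s) {
  FrameAssembler fa;
  std::string out, got;
  for (const char *p = s; *p; ++p)
    if (fa.feed(*p, out)) got = out;
  return got;
}
```

The firmware version adds what this sketch omits: an inter-byte timeout that abandons stalled frames, and an overflow guard that resets when a telegram exceeds the 512-byte buffer.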
-bool meter_parse_frame(const char *frame, size_t len, MeterData &data) {
-  if (!frame || len == 0) {
-    return false;
-  }
-  bool got_any = false;
-  bool energy_ok = false;
-  bool total_p_ok = false;
-  bool p1_ok = false;
-  bool p2_ok = false;
-  bool p3_ok = false;
-  char line[128];
-  size_t line_len = 0;
-  for (size_t i = 0; i < len; ++i) {
-    char c = frame[i];
-    if (c == '\r') {
-      continue;
-    }
-    if (c == '!') {
-      if (line_len + 1 < sizeof(line)) {
-        line[line_len++] = c;
-      }
-      line[line_len] = '\0';
-      data.valid = energy_ok || total_p_ok || p1_ok || p2_ok || p3_ok;
-      if (data.valid) {
-        g_frames_ok++;
-        g_last_good_frame_ms = millis();
-      } else {
-        g_frames_parse_fail++;
-      }
-      return data.valid;
-    }
-    if (c == '\n') {
-      line[line_len] = '\0';
-      if (line[0] == '!') {
-        data.valid = energy_ok || total_p_ok || p1_ok || p2_ok || p3_ok;
-        if (data.valid) {
-          g_frames_ok++;
-          g_last_good_frame_ms = millis();
-        } else {
-          g_frames_parse_fail++;
-        }
-        return data.valid;
-      }
-      ObisField field = detect_obis_field(line);
-      float value = NAN;
-      uint32_t meter_seconds = 0;
-      switch (field) {
-        case ObisField::Energy:
-          if (parse_obis_ascii_payload_value(line, value)) {
-            parse_obis_ascii_unit_scale(line, value);
-            data.energy_total_kwh = value;
-            energy_ok = true;
-            got_any = true;
-          }
-          break;
-        case ObisField::TotalPower:
-          if (parse_obis_ascii_payload_value(line, value)) {
-            data.total_power_w = value;
-            total_p_ok = true;
-            got_any = true;
-          }
-          break;
-        case ObisField::Phase1:
-          if (parse_obis_ascii_payload_value(line, value)) {
-            data.phase_power_w[0] = value;
-            p1_ok = true;
-            got_any = true;
-          }
-          break;
-        case ObisField::Phase2:
-          if (parse_obis_ascii_payload_value(line, value)) {
-            data.phase_power_w[1] = value;
-            p2_ok = true;
-            got_any = true;
-          }
-          break;
-        case ObisField::Phase3:
-          if (parse_obis_ascii_payload_value(line, value)) {
-            data.phase_power_w[2] = value;
-            p3_ok = true;
-            got_any = true;
-          }
-          break;
-        case ObisField::MeterSeconds:
-          if (parse_obis_hex_payload_u32(line, meter_seconds)) {
-            data.meter_seconds = meter_seconds;
-            data.meter_seconds_valid = true;
-          }
-          break;
-        default:
-          break;
-      }
-      if (energy_ok && total_p_ok && p1_ok && p2_ok && p3_ok && data.meter_seconds_valid) {
-        data.valid = true;
-        g_frames_ok++;
-        g_last_good_frame_ms = millis();
-        return true;
-      }
-      line_len = 0;
-      continue;
-    }
-    if (line_len + 1 < sizeof(line)) {
-      line[line_len++] = c;
-    }
-  }
-  data.valid = got_any;
-  if (data.valid) {
-    g_frames_ok++;
-    g_last_good_frame_ms = millis();
-  } else {
-    g_frames_parse_fail++;
-  }
-  return data.valid;
-}
-bool meter_read(MeterData &data) {
-  data.meter_seconds = 0;
-  data.meter_seconds_valid = false;
-  data.energy_total_kwh = NAN;
-  data.total_power_w = NAN;
-  data.phase_power_w[0] = NAN;
-  data.phase_power_w[1] = NAN;
-  data.phase_power_w[2] = NAN;
-  data.valid = false;
-  const char *frame = nullptr;
-  size_t len = 0;
-  if (!meter_poll_frame(frame, len)) {
-    return false;
-  }
-  return meter_parse_frame(frame, len, data);
-}


@@ -2,7 +2,6 @@
 #include <WiFi.h>
 #include <PubSubClient.h>
 #include <ArduinoJson.h>
-#include "ha_discovery_json.h"
 #include "config.h"
 #include "json_codec.h"
@@ -11,13 +10,6 @@ static PubSubClient mqtt_client(wifi_client);
 static WifiMqttConfig g_cfg;
 static String g_client_id;
-static const char *ha_manufacturer_anchor() {
-  StaticJsonDocument<32> doc;
-  JsonObject device = doc.createNestedObject("device");
-  device["manufacturer"] = HA_MANUFACTURER;
-  return HA_MANUFACTURER;
-}
 static const char *fault_text(FaultType fault) {
   switch (fault) {
     case FaultType::MeterRead:
@@ -102,9 +94,31 @@ bool mqtt_publish_faults(const char *device_id, const FaultCounters &counters, F
 static bool publish_discovery_sensor(const char *device_id, const char *key, const char *name, const char *unit, const char *device_class,
                                      const char *state_topic, const char *value_template) {
+  StaticJsonDocument<256> doc;
+  String unique_id = String("dd3_") + device_id + "_" + key;
+  String sensor_name = String(device_id) + " " + name;
+  doc["name"] = sensor_name;
+  doc["state_topic"] = state_topic;
+  doc["unique_id"] = unique_id;
+  if (unit && unit[0] != '\0') {
+    doc["unit_of_measurement"] = unit;
+  }
+  if (device_class && device_class[0] != '\0') {
+    doc["device_class"] = device_class;
+  }
+  doc["value_template"] = value_template;
+  JsonObject device = doc.createNestedObject("device");
+  JsonArray identifiers = device.createNestedArray("identifiers");
+  identifiers.add(String("dd3-") + device_id);
+  device["name"] = String("DD3 ") + device_id;
+  device["model"] = "DD3-LoRa-Bridge";
+  device["manufacturer"] = "DD3";
   String payload;
-  if (!ha_build_discovery_sensor_payload(device_id, key, name, unit, device_class, state_topic, value_template,
-                                         ha_manufacturer_anchor(), payload)) {
+  size_t len = serializeJson(doc, payload);
+  if (len == 0) {
     return false;
   }
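The inlined discovery payload keys each sensor by a deterministic `unique_id` built from the device ID and sensor key, and groups all sensors under one Home Assistant device via the shared `dd3-<id>` identifier. A sketch of the identifier scheme (the config-topic pattern is an assumption following Home Assistant's usual `homeassistant/<component>/<object_id>/config` convention, not something shown in this diff):

```cpp
#include <cassert>
#include <string>

// unique_id scheme used by publish_discovery_sensor above.
static std::string unique_id(const std::string &device_id, const std::string &key) {
  return "dd3_" + device_id + "_" + key;
}

// Assumed discovery config topic, per HA's usual convention.
static std::string config_topic(const std::string &device_id, const std::string &key) {
  return "homeassistant/sensor/" + unique_id(device_id, key) + "/config";
}

// Device identifier that groups all of a bridge's sensors in HA.
static std::string device_identifier(const std::string &device_id) {
  return "dd3-" + device_id;
}
```

Keeping `unique_id` stable across firmware versions matters: Home Assistant treats a changed `unique_id` as a brand-new entity and orphans the old one.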


@@ -2,11 +2,11 @@
 #include <limits.h>
 static constexpr uint16_t kMagic = 0xDDB3;
-// Breaking change: schema v3 replaces fixed dt_s spacing with a 30-bit present_mask.
-static constexpr uint8_t kSchema = 3;
+static constexpr uint8_t kSchema = 2;
 static constexpr uint8_t kFlags = 0x01;
 static constexpr size_t kMaxSamples = 30;
-static constexpr uint32_t kPresentMaskValidBits = 0x3FFFFFFFUL;
+static constexpr uint8_t kPayloadSchemaLegacy = 0;
+static constexpr uint8_t kPayloadSchemaEnergyMulti = 1;
 static void write_u16_le(uint8_t *dst, uint16_t value) {
   dst[0] = static_cast<uint8_t>(value & 0xFF);
@@ -99,15 +99,6 @@ static bool ensure_capacity(size_t needed, size_t cap, size_t pos) {
   return pos + needed <= cap;
 }
-static uint8_t bit_count32(uint32_t value) {
-  uint8_t count = 0;
-  while (value != 0) {
-    value &= (value - 1);
-    count++;
-  }
-  return count;
-}
 bool encode_batch(const BatchInput &in, uint8_t *out, size_t out_cap, size_t *out_len) {
   if (!out || !out_len) {
     return false;
@@ -115,31 +106,25 @@ bool encode_batch(const BatchInput &in, uint8_t *out, size_t out_cap, size_t *ou
   if (in.n > kMaxSamples) {
     return false;
   }
-  if ((in.present_mask & ~kPresentMaskValidBits) != 0) {
-    return false;
-  }
-  if (bit_count32(in.present_mask) != in.n) {
-    return false;
-  }
-  if (in.n == 0 && in.present_mask != 0) {
+  if (in.dt_s == 0) {
     return false;
   }
   size_t pos = 0;
-  if (!ensure_capacity(24, out_cap, pos)) {
+  if (!ensure_capacity(23, out_cap, pos)) {
     return false;
   }
   write_u16_le(&out[pos], kMagic);
   pos += 2;
   out[pos++] = kSchema;
   out[pos++] = kFlags;
+  out[pos++] = in.schema_id;
   write_u16_le(&out[pos], in.sender_id);
   pos += 2;
   write_u16_le(&out[pos], in.batch_id);
   pos += 2;
   write_u32_le(&out[pos], in.t_last);
   pos += 4;
-  write_u32_le(&out[pos], in.present_mask);
-  pos += 4;
+  out[pos++] = in.dt_s;
   out[pos++] = in.n;
   write_u16_le(&out[pos], in.battery_mV);
   pos += 2;
@@ -148,12 +133,32 @@ bool encode_batch(const BatchInput &in, uint8_t *out, size_t out_cap, size_t *ou
   out[pos++] = in.err_tx;
   out[pos++] = in.err_last;
   out[pos++] = in.err_rx_reject;
+  out[pos++] = in.meter_count;
   if (in.n == 0) {
     *out_len = pos;
     return true;
   }
+  if (in.schema_id == kPayloadSchemaEnergyMulti) {
+    if (in.meter_count == 0 || in.meter_count > 3) {
+      return false;
+    }
+    if (!ensure_capacity(static_cast<size_t>(in.n) * 12, out_cap, pos)) {
+      return false;
+    }
+    for (uint8_t i = 0; i < in.n; ++i) {
+      write_u32_le(&out[pos], in.energy1_kwh[i]);
+      pos += 4;
+      write_u32_le(&out[pos], in.energy2_kwh[i]);
+      pos += 4;
+      write_u32_le(&out[pos], in.energy3_kwh[i]);
+      pos += 4;
+    }
+    *out_len = pos;
+    return true;
+  }
   if (!ensure_capacity(4, out_cap, pos)) {
     return false;
   }
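The `ensure_capacity(23, ...)` bound matches the fixed header exactly: 2 (magic) + 1 (schema) + 1 (flags) + 1 (schema_id) + 2 (sender) + 2 (batch) + 4 (t_last) + 1 (dt_s) + 1 (n) + 2 (battery) + 5 (error counters) + 1 (meter_count) = 23 bytes. A host-side sketch of that layout, with sample field values that are purely illustrative:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

static void write_u16_le(uint8_t *dst, uint16_t v) {
  dst[0] = static_cast<uint8_t>(v & 0xFF);
  dst[1] = static_cast<uint8_t>(v >> 8);
}
static void write_u32_le(uint8_t *dst, uint32_t v) {
  dst[0] = static_cast<uint8_t>(v & 0xFF);
  dst[1] = static_cast<uint8_t>((v >> 8) & 0xFF);
  dst[2] = static_cast<uint8_t>((v >> 16) & 0xFF);
  dst[3] = static_cast<uint8_t>((v >> 24) & 0xFF);
}

// v2 header as laid out by encode_batch above; returns the header length.
// Field values (sender 7, batch 42, etc.) are illustrative only.
static size_t encode_header(uint8_t *out) {
  size_t pos = 0;
  write_u16_le(&out[pos], 0xDDB3); pos += 2;      // kMagic
  out[pos++] = 2;                                 // kSchema
  out[pos++] = 0x01;                              // kFlags
  out[pos++] = 1;                                 // schema_id (EnergyMulti)
  write_u16_le(&out[pos], 7); pos += 2;           // sender_id
  write_u16_le(&out[pos], 42); pos += 2;          // batch_id
  write_u32_le(&out[pos], 1700000000); pos += 4;  // t_last
  out[pos++] = 60;                                // dt_s
  out[pos++] = 0;                                 // n
  write_u16_le(&out[pos], 3750); pos += 2;        // battery_mV
  out[pos++] = 0;                                 // err_m
  out[pos++] = 0;                                 // err_d
  out[pos++] = 0;                                 // err_tx
  out[pos++] = 0;                                 // err_last
  out[pos++] = 0;                                 // err_rx_reject
  out[pos++] = 2;                                 // meter_count
  return pos;
}

// Helpers so the layout can be asserted without sharing a buffer.
static size_t header_len() { uint8_t b[32]; return encode_header(b); }
static uint8_t header_byte(size_t i) { uint8_t b[32] = {}; encode_header(b); return b[i]; }
```

Note the little-endian magic: 0xDDB3 appears on the wire as 0xB3 then 0xDD, and `dt_s` sits at offset 13, right after the 4-byte `t_last`.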
@@ -207,7 +212,7 @@ bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out) {
     return false;
   }
   size_t pos = 0;
-  if (len < 24) {
+  if (len < 23) {
     return false;
   }
   uint16_t magic = read_u16_le(&buf[pos]);
@@ -217,14 +222,14 @@ bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out) {
   if (magic != kMagic || schema != kSchema || (flags & 0x01) == 0) {
     return false;
   }
+  out->schema_id = buf[pos++];
   out->sender_id = read_u16_le(&buf[pos]);
   pos += 2;
   out->batch_id = read_u16_le(&buf[pos]);
   pos += 2;
   out->t_last = read_u32_le(&buf[pos]);
   pos += 4;
-  out->present_mask = read_u32_le(&buf[pos]);
-  pos += 4;
+  out->dt_s = buf[pos++];
   out->n = buf[pos++];
   out->battery_mV = read_u16_le(&buf[pos]);
   pos += 2;
@@ -233,17 +238,9 @@ bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out) {
   out->err_tx = buf[pos++];
   out->err_last = buf[pos++];
   out->err_rx_reject = buf[pos++];
+  out->meter_count = buf[pos++];
-  if (out->n > kMaxSamples) {
-    return false;
-  }
-  if ((out->present_mask & ~kPresentMaskValidBits) != 0) {
-    return false;
-  }
-  if (bit_count32(out->present_mask) != out->n) {
-    return false;
-  }
-  if (out->n == 0 && out->present_mask != 0) {
+  if (out->n > kMaxSamples || out->dt_s == 0) {
     return false;
   }
   if (out->n == 0) {
@@ -255,6 +252,29 @@ bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out) {
   }
     return pos == len;
   }
+  if (out->schema_id == kPayloadSchemaEnergyMulti) {
+    if (out->meter_count == 0 || out->meter_count > 3) {
+      return false;
+    }
+    if (pos + static_cast<size_t>(out->n) * 12 > len) {
+      return false;
+    }
+    for (uint8_t i = 0; i < out->n; ++i) {
+      out->energy1_kwh[i] = read_u32_le(&buf[pos]);
+      pos += 4;
+      out->energy2_kwh[i] = read_u32_le(&buf[pos]);
+      pos += 4;
+      out->energy3_kwh[i] = read_u32_le(&buf[pos]);
+      pos += 4;
+    }
+    for (uint8_t i = out->n; i < kMaxSamples; ++i) {
+      out->energy1_kwh[i] = 0;
+      out->energy2_kwh[i] = 0;
+      out->energy3_kwh[i] = 0;
+    }
+    return pos == len;
+  }
   if (pos + 4 > len) {
     return false;
   }
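In the EnergyMulti payload, each of the `n` samples is a fixed 12-byte triple of little-endian kWh counters (meters 1..3), regardless of `meter_count`. A host-side round-trip sketch of just that sample block:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

static void write_u32_le(uint8_t *d, uint32_t v) {
  d[0] = static_cast<uint8_t>(v & 0xFF);
  d[1] = static_cast<uint8_t>((v >> 8) & 0xFF);
  d[2] = static_cast<uint8_t>((v >> 16) & 0xFF);
  d[3] = static_cast<uint8_t>((v >> 24) & 0xFF);
}
static uint32_t read_u32_le(const uint8_t *d) {
  return static_cast<uint32_t>(d[0]) | (static_cast<uint32_t>(d[1]) << 8) |
         (static_cast<uint32_t>(d[2]) << 16) | (static_cast<uint32_t>(d[3]) << 24);
}

// Encode n samples as 12-byte triples, mirroring the EnergyMulti branch above.
static size_t encode_samples(uint8_t *out, const uint32_t (*e)[3], uint8_t n) {
  size_t pos = 0;
  for (uint8_t i = 0; i < n; ++i) {
    for (int m = 0; m < 3; ++m) {
      write_u32_le(&out[pos], e[i][m]);
      pos += 4;
    }
  }
  return pos;
}

// Read back meter m (0..2) of sample i from the encoded block.
static uint32_t decode_sample(const uint8_t *buf, uint8_t i, int m) {
  return read_u32_le(&buf[static_cast<size_t>(i) * 12 + static_cast<size_t>(m) * 4]);
}

// Round-trip with illustrative counter values.
static uint32_t roundtrip(uint8_t i, int m) {
  const uint32_t e[2][3] = {{100, 200, 300}, {101, 200, 301}};
  uint8_t buf[24];
  encode_samples(buf, e, 2);
  return decode_sample(buf, i, m);
}
```

The fixed 12-byte stride keeps the decoder simple at the cost of up to 8 wasted bytes per sample when `meter_count` is 1; the design trades airtime for not having a variable-width sample format.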
@@ -317,10 +337,11 @@ bool decode_batch(const uint8_t *buf, size_t len, BatchInput *out) {
 #ifdef PAYLOAD_CODEC_TEST
 bool payload_codec_self_test() {
   BatchInput in = {};
+  in.schema_id = kPayloadSchemaLegacy;
   in.sender_id = 1;
   in.batch_id = 42;
   in.t_last = 1700000000;
-  in.present_mask = (1UL << 0) | (1UL << 2) | (1UL << 3) | (1UL << 10) | (1UL << 29);
+  in.dt_s = 1;
   in.n = 5;
   in.battery_mV = 3750;
   in.err_m = 2;
@@ -328,6 +349,7 @@ bool payload_codec_self_test() {
   in.err_tx = 3;
   in.err_last = 2;
   in.err_rx_reject = 1;
+  in.meter_count = 0;
   in.energy_wh[0] = 100000;
   in.energy_wh[1] = 100001;
   in.energy_wh[2] = 100050;
@@ -363,7 +385,7 @@ bool payload_codec_self_test() {
   }
   if (out.sender_id != in.sender_id || out.batch_id != in.batch_id || out.t_last != in.t_last ||
-      out.present_mask != in.present_mask || out.n != in.n || out.battery_mV != in.battery_mV ||
+      out.dt_s != in.dt_s || out.n != in.n || out.battery_mV != in.battery_mV ||
       out.err_m != in.err_m || out.err_d != in.err_d || out.err_tx != in.err_tx || out.err_last != in.err_last ||
       out.err_rx_reject != in.err_rx_reject) {
     Serial.println("payload_codec_self_test: header mismatch");


@@ -3,17 +3,22 @@
 #include <Arduino.h>
 struct BatchInput {
+  uint8_t schema_id;
   uint16_t sender_id;
   uint16_t batch_id;
   uint32_t t_last;
-  uint32_t present_mask;
+  uint8_t dt_s;
   uint8_t n;
+  uint8_t meter_count;
   uint16_t battery_mV;
   uint8_t err_m;
   uint8_t err_d;
   uint8_t err_tx;
   uint8_t err_last;
   uint8_t err_rx_reject;
+  uint32_t energy1_kwh[30];
+  uint32_t energy2_kwh[30];
+  uint32_t energy3_kwh[30];
   uint32_t energy_wh[30];
   int16_t p1_w[30];
   int16_t p2_w[30];


@@ -9,7 +9,7 @@ static constexpr float BATTERY_DIVIDER = 2.0f;
 static constexpr float ADC_REF_V = 3.3f;
 void power_sender_init() {
-  setCpuFrequencyMhz(SENDER_CPU_MHZ);
+  setCpuFrequencyMhz(80);
   WiFi.mode(WIFI_OFF);
   esp_wifi_stop();
   esp_wifi_deinit();
@@ -117,33 +117,6 @@ void light_sleep_ms(uint32_t ms) {
   esp_light_sleep_start();
 }
-void light_sleep_chunked_ms(uint32_t total_ms, uint32_t chunk_ms) {
-  if (total_ms == 0) {
-    return;
-  }
-  if (chunk_ms == 0) {
-    chunk_ms = total_ms;
-  }
-  uint32_t start = millis();
-  for (;;) {
-    uint32_t elapsed = millis() - start;
-    if (elapsed >= total_ms) {
-      break;
-    }
-    uint32_t remaining = total_ms - elapsed;
-    uint32_t this_chunk = remaining > chunk_ms ? chunk_ms : remaining;
-    if (this_chunk < 10) {
-      // Light-sleep overhead (~1 ms save/restore) not worthwhile for tiny slices.
-      delay(this_chunk);
-      break;
-    }
-    light_sleep_ms(this_chunk);
-    // After wake the FreeRTOS scheduler runs higher-priority tasks (e.g. the
-    // meter_reader_task on Core 0) before returning here, so the UART HW FIFO
-    // is drained automatically between chunks.
-  }
-}
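The removed `light_sleep_chunked_ms` split a long sleep into `chunk_ms` slices, falling back to a plain `delay()` for slices under 10 ms where the light-sleep save/restore overhead is not worthwhile. The schedule it produced can be sketched on the host (timing calls replaced by a pure function over the requested durations):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Host-side sketch of the removed light_sleep_chunked_ms schedule: sleep in
// chunk_ms slices; a final slice under 10 ms would be a busy delay() instead
// of a light-sleep cycle, after which the loop stopped.
static std::vector<uint32_t> sleep_chunks(uint32_t total_ms, uint32_t chunk_ms) {
  std::vector<uint32_t> plan;
  if (total_ms == 0) return plan;
  if (chunk_ms == 0) chunk_ms = total_ms;
  uint32_t remaining = total_ms;
  while (remaining > 0) {
    uint32_t this_chunk = remaining > chunk_ms ? chunk_ms : remaining;
    plan.push_back(this_chunk);
    if (this_chunk < 10) break;  // firmware used delay() here, then stopped
    remaining -= this_chunk;
  }
  return plan;
}
```

For example, a 250 ms sleep with 100 ms chunks yields three slices of 100, 100, and 50 ms; chunking existed so the scheduler could drain the UART FIFO between wakes.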
 void go_to_deep_sleep(uint32_t seconds) {
   esp_sleep_enable_timer_wakeup(static_cast<uint64_t>(seconds) * 1000000ULL);
   esp_deep_sleep_start();


@@ -1,571 +0,0 @@
#include "receiver_pipeline.h"
#include <Arduino.h>
#include <math.h>
#include <stdarg.h>
#include "config.h"
#include "batch_reassembly_logic.h"
#include "display_ui.h"
#include "json_codec.h"
#include "lora_transport.h"
#include "mqtt_client.h"
#include "payload_codec.h"
#include "power_manager.h"
#include "sd_logger.h"
#include "time_manager.h"
#include "web_server.h"
#include "wifi_manager.h"
#ifdef ARDUINO_ARCH_ESP32
#include <esp_task_wdt.h>
#endif
namespace {
static uint16_t g_short_id = 0;
static char g_device_id[16] = "";
static ReceiverSharedState *g_shared = nullptr;
static RxRejectReason g_receiver_rx_reject_reason = RxRejectReason::None;
static uint32_t g_receiver_rx_reject_log_ms = 0;
#define g_sender_statuses (g_shared->sender_statuses)
#define g_sender_faults_remote (g_shared->sender_faults_remote)
#define g_sender_faults_remote_published (g_shared->sender_faults_remote_published)
#define g_sender_last_error_remote (g_shared->sender_last_error_remote)
#define g_sender_last_error_remote_published (g_shared->sender_last_error_remote_published)
#define g_sender_last_error_remote_utc (g_shared->sender_last_error_remote_utc)
#define g_sender_last_error_remote_ms (g_shared->sender_last_error_remote_ms)
#define g_sender_discovery_sent (g_shared->sender_discovery_sent)
#define g_last_batch_id_rx (g_shared->last_batch_id_rx)
#define g_receiver_faults (g_shared->receiver_faults)
#define g_receiver_faults_published (g_shared->receiver_faults_published)
#define g_receiver_last_error (g_shared->receiver_last_error)
#define g_receiver_last_error_published (g_shared->receiver_last_error_published)
#define g_receiver_last_error_utc (g_shared->receiver_last_error_utc)
#define g_receiver_last_error_ms (g_shared->receiver_last_error_ms)
#define g_receiver_discovery_sent (g_shared->receiver_discovery_sent)
#define g_ap_mode (g_shared->ap_mode)
static void watchdog_kick() {
#ifdef ARDUINO_ARCH_ESP32
esp_task_wdt_reset();
#endif
}
static constexpr size_t BATCH_HEADER_SIZE = 6;
static constexpr size_t BATCH_CHUNK_PAYLOAD = LORA_MAX_PAYLOAD - BATCH_HEADER_SIZE;
static constexpr size_t BATCH_MAX_COMPRESSED = 4096;
static constexpr uint32_t BATCH_RX_MARGIN_MS = 800;
static void serial_debug_printf(const char *fmt, ...) {
if (!SERIAL_DEBUG_MODE) {
return;
}
char buf[256];
va_list args;
va_start(args, fmt);
vsnprintf(buf, sizeof(buf), fmt, args);
va_end(args);
Serial.println(buf);
}
static uint8_t bit_count32(uint32_t value) {
uint8_t count = 0;
while (value != 0) {
value &= (value - 1);
count++;
}
return count;
}
static bool mqtt_publish_sample(const MeterData &data) {
#ifdef ENABLE_TEST_MODE
String payload;
if (!meterDataToJson(data, payload)) {
return false;
}
return mqtt_publish_test(data.device_id, payload);
#else
return mqtt_publish_state(data);
#endif
}
static BatchReassemblyState g_batch_rx = {};
static uint8_t g_batch_rx_buffer[BATCH_MAX_COMPRESSED] = {};
static void init_sender_statuses() {
for (uint8_t i = 0; i < NUM_SENDERS; ++i) {
g_sender_statuses[i] = {};
g_sender_statuses[i].has_data = false;
g_sender_statuses[i].last_update_ts_utc = 0;
g_sender_statuses[i].rx_batches_total = 0;
g_sender_statuses[i].rx_batches_duplicate = 0;
g_sender_statuses[i].rx_last_duplicate_ts_utc = 0;
g_sender_statuses[i].last_data.short_id = EXPECTED_SENDER_IDS[i];
snprintf(g_sender_statuses[i].last_data.device_id, sizeof(g_sender_statuses[i].last_data.device_id), "dd3-%04X", EXPECTED_SENDER_IDS[i]);
g_sender_faults_remote[i] = {};
g_sender_faults_remote_published[i] = {};
g_sender_last_error_remote[i] = FaultType::None;
g_sender_last_error_remote_published[i] = FaultType::None;
g_sender_last_error_remote_utc[i] = 0;
g_sender_last_error_remote_ms[i] = 0;
g_sender_discovery_sent[i] = false;
}
}
static void receiver_note_rx_reject(RxRejectReason reason, const char *context) {
if (reason == RxRejectReason::None) {
return;
}
g_receiver_rx_reject_reason = reason;
uint32_t now_ms = millis();
if (SERIAL_DEBUG_MODE && now_ms - g_receiver_rx_reject_log_ms >= 1000) {
g_receiver_rx_reject_log_ms = now_ms;
serial_debug_printf("rx_reject: %s reason=%s", context, rx_reject_reason_text(reason));
}
}
static void note_fault(FaultCounters &counters, FaultType &last_type, uint32_t &last_ts_utc, uint32_t &last_ts_ms, FaultType type) {
if (type == FaultType::MeterRead) {
counters.meter_read_fail++;
} else if (type == FaultType::Decode) {
counters.decode_fail++;
} else if (type == FaultType::LoraTx) {
counters.lora_tx_fail++;
}
last_type = type;
last_ts_utc = time_get_utc();
last_ts_ms = millis();
}
static void clear_faults(FaultCounters &counters, FaultType &last_type, uint32_t &last_ts_utc, uint32_t &last_ts_ms) {
counters = {};
last_type = FaultType::None;
last_ts_utc = 0;
last_ts_ms = 0;
}
static uint32_t age_seconds(uint32_t ts_utc, uint32_t ts_ms) {
if (time_is_synced() && ts_utc > 0) {
uint32_t now = time_get_utc();
return now > ts_utc ? now - ts_utc : 0;
}
return (millis() - ts_ms) / 1000;
}
static bool counters_changed(const FaultCounters &a, const FaultCounters &b) {
return a.meter_read_fail != b.meter_read_fail || a.decode_fail != b.decode_fail || a.lora_tx_fail != b.lora_tx_fail;
}
static void publish_faults_if_needed(const char *device_id, const FaultCounters &counters, FaultCounters &last_published,
FaultType last_error, FaultType &last_error_published, uint32_t last_error_utc, uint32_t last_error_ms) {
if (!mqtt_is_connected()) {
return;
}
if (!counters_changed(counters, last_published) && last_error == last_error_published) {
return;
}
uint32_t age = last_error != FaultType::None ? age_seconds(last_error_utc, last_error_ms) : 0;
if (mqtt_publish_faults(device_id, counters, last_error, age)) {
last_published = counters;
last_error_published = last_error;
}
}
static void write_u16_le(uint8_t *dst, uint16_t value) {
dst[0] = static_cast<uint8_t>(value & 0xFF);
dst[1] = static_cast<uint8_t>((value >> 8) & 0xFF);
}
static uint16_t read_u16_le(const uint8_t *src) {
return static_cast<uint16_t>(src[0]) | (static_cast<uint16_t>(src[1]) << 8);
}
static void write_u16_be(uint8_t *dst, uint16_t value) {
dst[0] = static_cast<uint8_t>((value >> 8) & 0xFF);
dst[1] = static_cast<uint8_t>(value & 0xFF);
}
static uint16_t read_u16_be(const uint8_t *src) {
return static_cast<uint16_t>(src[0] << 8) | static_cast<uint16_t>(src[1]);
}
static void write_u32_be(uint8_t *dst, uint32_t value) {
dst[0] = static_cast<uint8_t>((value >> 24) & 0xFF);
dst[1] = static_cast<uint8_t>((value >> 16) & 0xFF);
dst[2] = static_cast<uint8_t>((value >> 8) & 0xFF);
dst[3] = static_cast<uint8_t>(value & 0xFF);
}
uint32_t read_u32_be(const uint8_t *src) {
return (static_cast<uint32_t>(src[0]) << 24) |
(static_cast<uint32_t>(src[1]) << 16) |
(static_cast<uint32_t>(src[2]) << 8) |
static_cast<uint32_t>(src[3]);
}
static uint16_t sender_id_from_short_id(uint16_t short_id) {
for (uint8_t i = 0; i < NUM_SENDERS; ++i) {
if (EXPECTED_SENDER_IDS[i] == short_id) {
return static_cast<uint16_t>(i + 1);
}
}
return 0;
}
static uint16_t short_id_from_sender_id(uint16_t sender_id) {
if (sender_id == 0 || sender_id > NUM_SENDERS) {
return 0;
}
return EXPECTED_SENDER_IDS[sender_id - 1];
}
static uint32_t compute_batch_rx_timeout_ms(uint16_t total_len, uint8_t chunk_count) {
if (total_len == 0 || chunk_count == 0) {
return 10000;
}
size_t max_chunk_payload = total_len > BATCH_CHUNK_PAYLOAD ? BATCH_CHUNK_PAYLOAD : total_len;
size_t payload_len = BATCH_HEADER_SIZE + max_chunk_payload;
size_t packet_len = 3 + payload_len + 2; // 3-byte frame header (msg kind + 16-bit short id) + 2-byte CRC
uint32_t per_chunk_toa_ms = lora_airtime_ms(packet_len);
uint32_t timeout_ms = static_cast<uint32_t>(chunk_count) * per_chunk_toa_ms + BATCH_RX_MARGIN_MS;
return timeout_ms < 10000 ? 10000 : timeout_ms;
}
static void send_batch_ack(uint16_t batch_id, uint8_t sample_count) {
uint32_t epoch = time_get_utc();
uint8_t time_valid = (time_is_synced() && epoch >= MIN_ACCEPTED_EPOCH_UTC) ? 1 : 0;
if (!time_valid) {
epoch = 0;
}
LoraPacket ack = {};
ack.msg_kind = LoraMsgKind::AckDown;
ack.device_id_short = g_short_id;
ack.payload_len = LORA_ACK_DOWN_PAYLOAD_LEN;
ack.payload[0] = time_valid;
write_u16_be(&ack.payload[1], batch_id);
write_u32_be(&ack.payload[3], epoch);
uint8_t repeats = ACK_REPEAT_COUNT == 0 ? 1 : ACK_REPEAT_COUNT;
for (uint8_t i = 0; i < repeats; ++i) {
lora_send(ack);
if (i + 1 < repeats && ACK_REPEAT_DELAY_MS > 0) {
delay(ACK_REPEAT_DELAY_MS);
}
}
serial_debug_printf("ack: tx batch_id=%u time_valid=%u epoch=%lu samples=%u",
batch_id,
static_cast<unsigned>(time_valid),
static_cast<unsigned long>(epoch),
static_cast<unsigned>(sample_count));
lora_receive_continuous();
}
static void reset_batch_rx() {
batch_reassembly_reset(g_batch_rx);
}
static bool process_batch_packet(const LoraPacket &pkt, BatchInput &out_batch, bool &decode_error, uint16_t &out_batch_id) {
decode_error = false;
if (pkt.payload_len < BATCH_HEADER_SIZE) {
return false;
}
uint16_t batch_id = read_u16_le(&pkt.payload[0]);
uint8_t chunk_index = pkt.payload[2];
uint8_t chunk_count = pkt.payload[3];
uint16_t total_len = read_u16_le(&pkt.payload[4]);
const uint8_t *chunk_data = &pkt.payload[BATCH_HEADER_SIZE];
size_t chunk_len = pkt.payload_len - BATCH_HEADER_SIZE;
uint32_t now_ms = millis();
uint16_t complete_len = 0;
BatchReassemblyStatus reassembly_status = batch_reassembly_push(
g_batch_rx, batch_id, chunk_index, chunk_count, total_len, chunk_data, chunk_len, now_ms,
compute_batch_rx_timeout_ms(total_len, chunk_count), BATCH_MAX_COMPRESSED, g_batch_rx_buffer,
sizeof(g_batch_rx_buffer), complete_len);
if (reassembly_status == BatchReassemblyStatus::ErrorReset) {
return false;
}
if (reassembly_status == BatchReassemblyStatus::InProgress) {
return false;
}
if (reassembly_status == BatchReassemblyStatus::Complete) {
if (!decode_batch(g_batch_rx_buffer, complete_len, &out_batch)) {
decode_error = true;
return false;
}
out_batch_id = batch_id;
return true;
}
return false;
}
// Helper function to attempt WiFi reconnection when stuck in AP mode
// Retries WiFi connection periodically (configurable WIFI_RECONNECT_INTERVAL_MS)
// to recover from temporary WiFi outages
static void try_wifi_reconnect_if_in_ap_mode() {
if (!g_ap_mode) {
// Already in STA mode, no need to reconnect
return;
}
if (!g_shared || g_shared->wifi_config.ssid.length() == 0) {
// No valid WiFi config to reconnect with
return;
}
uint32_t now_ms = millis();
if (g_shared->last_wifi_reconnect_attempt_ms == 0 ||
now_ms - g_shared->last_wifi_reconnect_attempt_ms >= WIFI_RECONNECT_INTERVAL_MS) {
// Update the last attempt time
g_shared->last_wifi_reconnect_attempt_ms = now_ms;
if (SERIAL_DEBUG_MODE) {
serial_debug_printf("wifi_reconnect: attempting to reconnect from AP mode");
}
// Try to reconnect with 10 second timeout
if (wifi_try_reconnect_sta(g_shared->wifi_config, 10000)) {
// Reconnection successful!
g_ap_mode = false;
if (SERIAL_DEBUG_MODE) {
serial_debug_printf("wifi_reconnect: reconnection successful, switching from AP to STA mode");
}
} else {
// Reconnection failed, restore AP mode to ensure web interface is available
if (g_shared->ap_ssid[0] != '\0') {
wifi_restore_ap_mode(g_shared->ap_ssid, g_shared->ap_password);
if (SERIAL_DEBUG_MODE) {
serial_debug_printf("wifi_reconnect: reconnection failed, restored AP mode");
}
}
}
}
}
static void receiver_loop() {
watchdog_kick();
LoraPacket pkt = {};
if (lora_receive(pkt, 0)) {
if (pkt.msg_kind == LoraMsgKind::BatchUp) {
BatchInput batch = {};
bool decode_error = false;
uint16_t batch_id = 0;
if (process_batch_packet(pkt, batch, decode_error, batch_id)) {
int8_t sender_idx = -1;
for (uint8_t i = 0; i < NUM_SENDERS; ++i) {
if (pkt.device_id_short == EXPECTED_SENDER_IDS[i]) {
sender_idx = static_cast<int8_t>(i);
break;
}
}
if (sender_idx < 0) {
receiver_note_rx_reject(RxRejectReason::UnknownSender, "batch");
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
serial_debug_printf("batch: reject unknown_sender short_id=%04X sender_id=%u batch_id=%u",
pkt.device_id_short,
static_cast<unsigned>(batch.sender_id),
static_cast<unsigned>(batch_id));
goto receiver_loop_done;
}
uint16_t expected_sender_id = static_cast<uint16_t>(sender_idx + 1);
if (batch.sender_id != expected_sender_id) {
receiver_note_rx_reject(RxRejectReason::DeviceIdMismatch, "batch");
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
serial_debug_printf("batch: reject device_id_mismatch short_id=%04X sender_id=%u expected=%u batch_id=%u",
pkt.device_id_short,
static_cast<unsigned>(batch.sender_id),
static_cast<unsigned>(expected_sender_id),
static_cast<unsigned>(batch_id));
goto receiver_loop_done;
}
bool duplicate = g_last_batch_id_rx[sender_idx] == batch_id;
SenderStatus &status = g_sender_statuses[sender_idx];
if (status.rx_batches_total < UINT32_MAX) {
status.rx_batches_total++;
}
if (duplicate) {
if (status.rx_batches_duplicate < UINT32_MAX) {
status.rx_batches_duplicate++;
}
uint32_t duplicate_ts = time_get_utc();
if (duplicate_ts == 0) {
duplicate_ts = batch.t_last;
}
status.rx_last_duplicate_ts_utc = duplicate_ts;
}
send_batch_ack(batch_id, batch.n);
if (duplicate) {
goto receiver_loop_done;
}
g_last_batch_id_rx[sender_idx] = batch_id;
if (batch.n == 0) {
goto receiver_loop_done;
}
if (batch.n > METER_BATCH_MAX_SAMPLES) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
if (bit_count32(batch.present_mask) != batch.n) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
size_t count = batch.n;
uint16_t short_id = pkt.device_id_short;
if (short_id == 0) {
short_id = short_id_from_sender_id(batch.sender_id);
}
if (batch.t_last < static_cast<uint32_t>(METER_BATCH_MAX_SAMPLES - 1) || batch.t_last < MIN_ACCEPTED_EPOCH_UTC) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
const uint32_t window_start = batch.t_last - static_cast<uint32_t>(METER_BATCH_MAX_SAMPLES - 1);
MeterData samples[METER_BATCH_MAX_SAMPLES];
float bat_v = batch.battery_mV > 0 ? static_cast<float>(batch.battery_mV) / 1000.0f : NAN;
size_t s = 0;
for (uint8_t slot = 0; slot < METER_BATCH_MAX_SAMPLES; ++slot) {
if ((batch.present_mask & (1UL << slot)) == 0) {
continue;
}
if (s >= count) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
MeterData &data = samples[s];
data = {};
data.short_id = short_id;
if (short_id != 0) {
snprintf(data.device_id, sizeof(data.device_id), "dd3-%04X", short_id);
} else {
snprintf(data.device_id, sizeof(data.device_id), "dd3-0000");
}
data.ts_utc = window_start + static_cast<uint32_t>(slot);
if (data.ts_utc < MIN_ACCEPTED_EPOCH_UTC) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
data.energy_total_kwh = static_cast<float>(batch.energy_wh[s]) / 1000.0f;
data.phase_power_w[0] = static_cast<float>(batch.p1_w[s]);
data.phase_power_w[1] = static_cast<float>(batch.p2_w[s]);
data.phase_power_w[2] = static_cast<float>(batch.p3_w[s]);
data.total_power_w = data.phase_power_w[0] + data.phase_power_w[1] + data.phase_power_w[2];
data.battery_voltage_v = bat_v;
data.battery_percent = !isnan(bat_v) ? battery_percent_from_voltage(bat_v) : 0;
data.valid = true;
data.link_valid = true;
data.link_rssi_dbm = pkt.rssi_dbm;
data.link_snr_db = pkt.snr_db;
data.err_meter_read = batch.err_m;
data.err_decode = batch.err_d;
data.err_lora_tx = batch.err_tx;
data.last_error = static_cast<FaultType>(batch.err_last);
data.rx_reject_reason = batch.err_rx_reject;
sd_logger_log_sample(data, (s + 1 == count) && data.last_error != FaultType::None);
s++;
}
if (s != count) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
goto receiver_loop_done;
}
web_server_set_last_batch(static_cast<uint8_t>(sender_idx), samples, count);
for (size_t s = 0; s < count; ++s) {
mqtt_publish_sample(samples[s]);
}
g_sender_statuses[sender_idx].last_data = samples[count - 1];
g_sender_statuses[sender_idx].last_update_ts_utc = samples[count - 1].ts_utc;
g_sender_statuses[sender_idx].has_data = true;
g_sender_faults_remote[sender_idx].meter_read_fail = samples[count - 1].err_meter_read;
g_sender_faults_remote[sender_idx].lora_tx_fail = samples[count - 1].err_lora_tx;
g_sender_last_error_remote[sender_idx] = samples[count - 1].last_error;
g_sender_last_error_remote_utc[sender_idx] = time_get_utc();
g_sender_last_error_remote_ms[sender_idx] = millis();
if (ENABLE_HA_DISCOVERY && !g_sender_discovery_sent[sender_idx]) {
g_sender_discovery_sent[sender_idx] = mqtt_publish_discovery(samples[count - 1].device_id);
}
publish_faults_if_needed(samples[count - 1].device_id, g_sender_faults_remote[sender_idx], g_sender_faults_remote_published[sender_idx],
g_sender_last_error_remote[sender_idx], g_sender_last_error_remote_published[sender_idx],
g_sender_last_error_remote_utc[sender_idx], g_sender_last_error_remote_ms[sender_idx]);
} else if (decode_error) {
note_fault(g_receiver_faults, g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms, FaultType::Decode);
display_set_last_error(g_receiver_last_error, g_receiver_last_error_utc, g_receiver_last_error_ms);
}
}
}
receiver_loop_done:
// Try to reconnect to WiFi if stuck in AP mode due to unreliable WiFi
try_wifi_reconnect_if_in_ap_mode();
mqtt_loop();
web_server_loop();
if (ENABLE_HA_DISCOVERY && !g_receiver_discovery_sent) {
g_receiver_discovery_sent = mqtt_publish_discovery(g_device_id);
}
publish_faults_if_needed(g_device_id, g_receiver_faults, g_receiver_faults_published,
g_receiver_last_error, g_receiver_last_error_published, g_receiver_last_error_utc, g_receiver_last_error_ms);
display_set_receiver_status(g_ap_mode, wifi_is_connected() ? wifi_get_ssid().c_str() : "AP", mqtt_is_connected());
display_tick();
watchdog_kick();
}
} // namespace
bool ReceiverPipeline::begin(const ReceiverPipelineConfig &config) {
if (!config.shared) {
return false;
}
g_shared = config.shared;
*g_shared = {};
g_short_id = config.short_id;
if (config.device_id) {
strncpy(g_device_id, config.device_id, sizeof(g_device_id));
g_device_id[sizeof(g_device_id) - 1] = '\0';
} else {
g_device_id[0] = '\0';
}
init_sender_statuses();
reset_batch_rx();
g_receiver_rx_reject_reason = RxRejectReason::None;
g_receiver_rx_reject_log_ms = 0;
return true;
}
void ReceiverPipeline::loop() {
if (!g_shared) {
return;
}
receiver_loop();
}
ReceiverStats ReceiverPipeline::stats() const {
ReceiverStats stats = {};
if (!g_shared) {
return stats;
}
stats.receiver_decode_fail = g_receiver_faults.decode_fail;
stats.receiver_lora_tx_fail = g_receiver_faults.lora_tx_fail;
stats.last_rx_reject = g_receiver_rx_reject_reason;
stats.receiver_discovery_sent = g_receiver_discovery_sent;
return stats;
}

View File

@@ -1,27 +0,0 @@
-#pragma once
-#include <Arduino.h>
-#include "app_context.h"
-#include "data_model.h"
-struct ReceiverPipelineConfig {
-  uint16_t short_id;
-  const char *device_id;
-  ReceiverSharedState *shared;
-};
-struct ReceiverStats {
-  uint32_t receiver_decode_fail;
-  uint32_t receiver_lora_tx_fail;
-  RxRejectReason last_rx_reject;
-  bool receiver_discovery_sent;
-};
-class ReceiverPipeline {
- public:
-  bool begin(const ReceiverPipelineConfig &config);
-  void loop();
-  ReceiverStats stats() const;
-};

View File

@@ -27,30 +27,15 @@ static bool ensure_dir(const String &path) {
   return SD.mkdir(path);
 }
-static String format_date_local(uint32_t ts_utc) {
+static String format_date_utc(uint32_t ts_utc) {
   time_t t = static_cast<time_t>(ts_utc);
-  struct tm tm_local;
-  localtime_r(&t, &tm_local);
+  struct tm tm_utc;
+  gmtime_r(&t, &tm_utc);
   char buf[16];
   snprintf(buf, sizeof(buf), "%04d-%02d-%02d",
-           tm_local.tm_year + 1900,
-           tm_local.tm_mon + 1,
-           tm_local.tm_mday);
+           tm_utc.tm_year + 1900,
+           tm_utc.tm_mon + 1,
+           tm_utc.tm_mday);
-  return String(buf);
-}
-static String format_hms_local(uint32_t ts_utc) {
-  if (ts_utc == 0) {
-    return "";
-  }
-  time_t t = static_cast<time_t>(ts_utc);
-  struct tm tm_local;
-  localtime_r(&t, &tm_local);
-  char buf[16];
-  snprintf(buf, sizeof(buf), "%02d:%02d:%02d",
-           tm_local.tm_hour,
-           tm_local.tm_min,
-           tm_local.tm_sec);
   return String(buf);
 }
@@ -94,7 +79,7 @@ void sd_logger_log_sample(const MeterData &data, bool include_error_text) {
     return;
   }
-  String filename = sender_dir + "/" + format_date_local(data.ts_utc) + ".csv";
+  String filename = sender_dir + "/" + format_date_utc(data.ts_utc) + ".csv";
   bool new_file = !SD.exists(filename);
   File f = SD.open(filename, FILE_APPEND);
   if (!f) {
@@ -102,14 +87,11 @@ void sd_logger_log_sample(const MeterData &data, bool include_error_text) {
   }
   if (new_file) {
-    f.println("ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last");
+    f.println("ts_utc,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last");
   }
-  String ts_hms_local = format_hms_local(data.ts_utc);
   f.print(data.ts_utc);
   f.print(',');
-  f.print(ts_hms_local);
-  f.print(',');
   f.print(data.total_power_w, 1);
   f.print(',');
   f.print(data.phase_power_w[0], 1);

File diff suppressed because it is too large

View File

@@ -1,44 +0,0 @@
-#pragma once
-#include <Arduino.h>
-struct SenderStateMachineConfig {
-  uint16_t short_id;
-  const char *device_id;
-};
-struct SenderStats {
-  uint8_t queue_depth;
-  uint8_t build_count;
-  uint16_t inflight_batch_id;
-  uint16_t last_sent_batch_id;
-  uint16_t last_acked_batch_id;
-  uint8_t retry_count;
-  bool ack_pending;
-  uint32_t ack_timeout_total;
-  uint32_t ack_retry_total;
-  uint32_t ack_miss_streak;
-  uint32_t rx_window_ms;
-  uint32_t sleep_ms;
-};
-class SenderStateMachine {
- public:
-  bool begin(const SenderStateMachineConfig &config);
-  void loop();
-  SenderStats stats() const;
- private:
-  enum class State : uint8_t {
-    Syncing = 0,
-    Normal = 1,
-    Catchup = 2,
-    WaitAck = 3
-  };
-  void handleMeterRead(uint32_t now_ms);
-  void maybeSendBatch(uint32_t now_ms);
-  void handleAckWindow(uint32_t now_ms);
-  bool applyTimeFromAck(uint8_t time_valid, uint32_t ack_epoch);
-  void validateInvariants();
-};

View File

@@ -12,8 +12,93 @@
 static uint32_t g_last_test_ms = 0;
 static uint16_t g_test_code_counter = 1000;
+static uint16_t g_test_batch_id = 1;
+static uint16_t g_test_last_acked_batch_id = 0;
 static constexpr uint32_t TEST_SEND_INTERVAL_MS = 30000;
+static void write_u16_be(uint8_t *dst, uint16_t value) {
+  dst[0] = static_cast<uint8_t>((value >> 8) & 0xFF);
+  dst[1] = static_cast<uint8_t>(value & 0xFF);
+}
+static uint16_t read_u16_be(const uint8_t *src) {
+  return static_cast<uint16_t>(src[0] << 8) | static_cast<uint16_t>(src[1]);
+}
+static void write_u32_be(uint8_t *dst, uint32_t value) {
+  dst[0] = static_cast<uint8_t>((value >> 24) & 0xFF);
+  dst[1] = static_cast<uint8_t>((value >> 16) & 0xFF);
+  dst[2] = static_cast<uint8_t>((value >> 8) & 0xFF);
+  dst[3] = static_cast<uint8_t>(value & 0xFF);
+}
+static uint32_t read_u32_be(const uint8_t *src) {
+  return (static_cast<uint32_t>(src[0]) << 24) |
+         (static_cast<uint32_t>(src[1]) << 16) |
+         (static_cast<uint32_t>(src[2]) << 8) |
+         static_cast<uint32_t>(src[3]);
+}
+static uint32_t ack_window_ms() {
+  uint32_t air_ms = lora_airtime_ms(lora_frame_size(LORA_ACK_DOWN_PAYLOAD_LEN));
+  uint32_t window_ms = air_ms + 300;
+  if (window_ms < 1200) {
+    window_ms = 1200;
+  }
+  if (window_ms > 4000) {
+    window_ms = 4000;
+  }
+  return window_ms;
+}
+static bool receive_ack_for_batch(uint16_t batch_id, uint8_t &time_valid, uint32_t &ack_epoch, int16_t &rssi_dbm, float &snr_db) {
+  LoraPacket ack_pkt = {};
+  uint32_t window_ms = ack_window_ms();
+  bool got_ack = lora_receive_window(ack_pkt, window_ms);
+  if (!got_ack) {
+    got_ack = lora_receive_window(ack_pkt, window_ms / 2);
+  }
+  if (!got_ack || ack_pkt.msg_kind != LoraMsgKind::AckDown || ack_pkt.payload_len < LORA_ACK_DOWN_PAYLOAD_LEN) {
+    return false;
+  }
+  uint16_t ack_id = read_u16_be(&ack_pkt.payload[1]);
+  if (ack_id != batch_id) {
+    return false;
+  }
+  time_valid = ack_pkt.payload[0] & 0x01;
+  ack_epoch = read_u32_be(&ack_pkt.payload[3]);
+  rssi_dbm = ack_pkt.rssi_dbm;
+  snr_db = ack_pkt.snr_db;
+  return true;
+}
+static void send_test_ack(uint16_t self_short_id, uint16_t batch_id, uint8_t &time_valid, uint32_t &ack_epoch) {
+  ack_epoch = time_get_utc();
+  time_valid = (time_is_synced() && ack_epoch >= MIN_ACCEPTED_EPOCH_UTC) ? 1 : 0;
+  if (!time_valid) {
+    ack_epoch = 0;
+  }
+  LoraPacket ack = {};
+  ack.msg_kind = LoraMsgKind::AckDown;
+  ack.device_id_short = self_short_id;
+  ack.payload_len = LORA_ACK_DOWN_PAYLOAD_LEN;
+  ack.payload[0] = time_valid;
+  write_u16_be(&ack.payload[1], batch_id);
+  write_u32_be(&ack.payload[3], ack_epoch);
+  uint8_t repeats = ACK_REPEAT_COUNT == 0 ? 1 : ACK_REPEAT_COUNT;
+  for (uint8_t i = 0; i < repeats; ++i) {
+    lora_send(ack);
+    if (i + 1 < repeats && ACK_REPEAT_DELAY_MS > 0) {
+      delay(ACK_REPEAT_DELAY_MS);
+    }
+  }
+  lora_receive_continuous();
+}
 void test_sender_loop(uint16_t short_id, const char *device_id) {
   if (millis() - g_last_test_ms < TEST_SEND_INTERVAL_MS) {
     return;
@@ -36,11 +121,13 @@ void test_sender_loop(uint16_t short_id, const char *device_id) {
   uint32_t now_utc = time_get_utc();
   uint32_t ts = now_utc > 0 ? now_utc : millis() / 1000;
-  StaticJsonDocument<128> doc;
+  StaticJsonDocument<192> doc;
   doc["id"] = device_id;
   doc["role"] = "sender";
   doc["test_code"] = code;
   doc["ts"] = ts;
+  doc["batch_id"] = g_test_batch_id;
+  doc["last_acked"] = g_test_last_acked_batch_id;
   char bat_buf[8];
   snprintf(bat_buf, sizeof(bat_buf), "%.2f", data.battery_voltage_v);
   doc["bat_v"] = serialized(bat_buf);
@@ -60,11 +147,32 @@ void test_sender_loop(uint16_t short_id, const char *device_id) {
   pkt.device_id_short = short_id;
   pkt.payload_len = json.length();
   memcpy(pkt.payload, json.c_str(), pkt.payload_len);
-  lora_send(pkt);
+  if (!lora_send(pkt)) {
+    return;
+  }
+  uint8_t time_valid = 0;
+  uint32_t ack_epoch = 0;
+  int16_t ack_rssi = 0;
+  float ack_snr = 0.0f;
+  if (receive_ack_for_batch(g_test_batch_id, time_valid, ack_epoch, ack_rssi, ack_snr)) {
+    if (time_valid == 1 && ack_epoch >= MIN_ACCEPTED_EPOCH_UTC) {
+      time_set_utc(ack_epoch);
+    }
+    g_test_last_acked_batch_id = g_test_batch_id;
+    g_test_batch_id++;
+    if (SERIAL_DEBUG_MODE) {
+      Serial.printf("test ack: batch=%u time_valid=%u epoch=%lu rssi=%d snr=%.1f\n",
+                    static_cast<unsigned>(g_test_last_acked_batch_id),
+                    static_cast<unsigned>(time_valid),
+                    static_cast<unsigned long>(ack_epoch),
+                    static_cast<int>(ack_rssi),
+                    static_cast<double>(ack_snr));
+    }
+  }
 }
 void test_receiver_loop(SenderStatus *statuses, uint8_t count, uint16_t self_short_id) {
-  (void)self_short_id;
   LoraPacket pkt = {};
   if (!lora_receive(pkt, 0)) {
     return;
@@ -73,22 +181,28 @@ void test_receiver_loop(SenderStatus *statuses, uint8_t count, uint16_t self_sho
     return;
   }
-  uint8_t decompressed[160];
+  uint8_t decompressed[192];
   if (pkt.payload_len >= sizeof(decompressed)) {
    return;
   }
   memcpy(decompressed, pkt.payload, pkt.payload_len);
   decompressed[pkt.payload_len] = '\0';
-  StaticJsonDocument<128> doc;
+  StaticJsonDocument<192> doc;
   if (deserializeJson(doc, reinterpret_cast<const char *>(decompressed)) != DeserializationError::Ok) {
    return;
   }
   const char *id = doc["id"] | "";
   const char *code = doc["test_code"] | "";
+  uint16_t batch_id = static_cast<uint16_t>(doc["batch_id"] | 0);
+  uint32_t ts = doc["ts"] | 0;
   float bat_v = doc["bat_v"] | NAN;
+  uint8_t time_valid = 0;
+  uint32_t ack_epoch = 0;
+  send_test_ack(self_short_id, batch_id, time_valid, ack_epoch);
   for (uint8_t i = 0; i < count; ++i) {
     if (strncmp(statuses[i].last_data.device_id, id, sizeof(statuses[i].last_data.device_id)) == 0) {
       display_set_test_code_for_sender(i, code);
@@ -96,12 +210,34 @@ void test_receiver_loop(SenderStatus *statuses, uint8_t count, uint16_t self_sho
       statuses[i].last_data.battery_voltage_v = bat_v;
       statuses[i].last_data.battery_percent = battery_percent_from_voltage(bat_v);
     }
+    statuses[i].last_data.link_valid = true;
+    statuses[i].last_data.link_rssi_dbm = pkt.rssi_dbm;
+    statuses[i].last_data.link_snr_db = pkt.snr_db;
+    statuses[i].last_data.ts_utc = ts;
+    statuses[i].last_acked_batch_id = batch_id;
    statuses[i].has_data = true;
    statuses[i].last_update_ts_utc = time_get_utc();
    break;
   }
  }
-  mqtt_publish_test(id, String(reinterpret_cast<const char *>(decompressed)));
+  StaticJsonDocument<256> mqtt_doc;
+  mqtt_doc["id"] = id;
+  mqtt_doc["role"] = "receiver";
+  mqtt_doc["test_code"] = code;
+  mqtt_doc["ts"] = ts;
+  mqtt_doc["batch_id"] = batch_id;
+  mqtt_doc["acked_batch_id"] = batch_id;
+  if (!isnan(bat_v)) {
+    mqtt_doc["bat_v"] = bat_v;
+  }
+  mqtt_doc["rssi"] = pkt.rssi_dbm;
+  mqtt_doc["snr"] = pkt.snr_db;
+  mqtt_doc["time_valid"] = time_valid;
+  mqtt_doc["ack_epoch"] = ack_epoch;
+  String mqtt_payload;
+  serializeJson(mqtt_doc, mqtt_payload);
+  mqtt_publish_test(id, mqtt_payload);
 }
 #endif

View File

@@ -1,15 +1,9 @@
 #include "time_manager.h"
-#include "config.h"
 #include <time.h>
-#ifdef ARDUINO_ARCH_ESP32
-#include <esp_sntp.h>
-#endif
 static bool g_time_synced = false;
-static bool g_clock_plausible = false;
 static bool g_tz_set = false;
 static uint32_t g_last_sync_utc = 0;
-static constexpr uint32_t MIN_PLAUSIBLE_EPOCH_UTC = 1672531200UL; // 2023-01-01 00:00:00 UTC
 static void note_last_sync(uint32_t epoch) {
   if (epoch == 0) {
@@ -18,83 +12,45 @@ static void note_last_sync(uint32_t epoch) {
   g_last_sync_utc = epoch;
 }
-static bool epoch_is_plausible(time_t epoch) {
-  return epoch >= static_cast<time_t>(MIN_PLAUSIBLE_EPOCH_UTC);
-}
-static void mark_synced(uint32_t epoch) {
-  if (epoch == 0) {
-    return;
-  }
-  g_time_synced = true;
-  g_clock_plausible = true;
-  note_last_sync(epoch);
-}
-#ifdef ARDUINO_ARCH_ESP32
-static void ntp_sync_notification_cb(struct timeval *tv) {
-  time_t epoch = tv ? tv->tv_sec : time(nullptr);
-  if (!epoch_is_plausible(epoch)) {
-    return;
-  }
-  if (epoch > static_cast<time_t>(UINT32_MAX)) {
-    return;
-  }
-  mark_synced(static_cast<uint32_t>(epoch));
-}
-#endif
-static void ensure_timezone_set() {
-  if (g_tz_set) {
-    return;
-  }
-  setenv("TZ", TIMEZONE_TZ, 1);
-  tzset();
-  g_tz_set = true;
-}
 void time_receiver_init(const char *ntp_server_1, const char *ntp_server_2) {
   const char *server1 = (ntp_server_1 && ntp_server_1[0] != '\0') ? ntp_server_1 : "pool.ntp.org";
   const char *server2 = (ntp_server_2 && ntp_server_2[0] != '\0') ? ntp_server_2 : "time.nist.gov";
-#ifdef ARDUINO_ARCH_ESP32
-  sntp_set_time_sync_notification_cb(ntp_sync_notification_cb);
-#endif
   configTime(0, 0, server1, server2);
-  ensure_timezone_set();
+  if (!g_tz_set) {
+    setenv("TZ", "CET-1CEST,M3.5.0/2,M10.5.0/3", 1);
+    tzset();
+    g_tz_set = true;
+  }
 }
 uint32_t time_get_utc() {
   time_t now = time(nullptr);
-  if (!epoch_is_plausible(now)) {
-    g_clock_plausible = false;
+  if (now < 1672531200) {
     return 0;
   }
-  g_clock_plausible = true;
-#ifdef ARDUINO_ARCH_ESP32
-  if (!g_time_synced && sntp_get_sync_status() == SNTP_SYNC_STATUS_COMPLETED) {
-    mark_synced(static_cast<uint32_t>(now));
+  if (!g_time_synced) {
+    g_time_synced = true;
+    note_last_sync(static_cast<uint32_t>(now));
   }
-#endif
   return static_cast<uint32_t>(now);
 }
 bool time_is_synced() {
-  (void)time_get_utc();
-  return g_time_synced && g_clock_plausible;
+  return g_time_synced || time_get_utc() > 0;
 }
 void time_set_utc(uint32_t epoch) {
-  ensure_timezone_set();
+  if (!g_tz_set) {
+    setenv("TZ", "CET-1CEST,M3.5.0/2,M10.5.0/3", 1);
+    tzset();
+    g_tz_set = true;
+  }
   struct timeval tv;
   tv.tv_sec = epoch;
   tv.tv_usec = 0;
   settimeofday(&tv, nullptr);
-  if (epoch_is_plausible(static_cast<time_t>(epoch))) {
-    mark_synced(epoch);
-  } else {
-    g_clock_plausible = false;
-    g_time_synced = false;
-  }
+  g_time_synced = true;
+  note_last_sync(epoch);
 }
 void time_get_local_hhmm(char *out, size_t out_len) {

View File

@@ -57,33 +57,6 @@ static HistoryJob g_history = {};
 static constexpr size_t SD_LIST_MAX_FILES = 200;
 static constexpr size_t SD_DOWNLOAD_MAX_PATH = 160;
-static String format_local_hms(uint32_t ts_utc) {
-	if (ts_utc == 0) {
-		return "n/a";
-	}
-	time_t t = static_cast<time_t>(ts_utc);
-	struct tm tm_local;
-	localtime_r(&t, &tm_local);
-	char buf[24];
-	strftime(buf, sizeof(buf), "%H:%M:%S %Z", &tm_local);
-	return String(buf);
-}
-static String format_epoch_local_hms(uint32_t ts_utc) {
-	if (ts_utc == 0) {
-		return "n/a";
-	}
-	return String(ts_utc) + " (" + format_local_hms(ts_utc) + ")";
-}
-static uint32_t timestamp_age_seconds(uint32_t ts_utc) {
-	uint32_t now_utc = time_get_utc();
-	if (ts_utc == 0 || now_utc < ts_utc) {
-		return 0;
-	}
-	return now_utc - ts_utc;
-}
 static int32_t round_power_w(float value) {
 	if (isnan(value)) {
 		return 0;
@@ -243,16 +216,7 @@ static void history_reset() {
 	g_history = {};
 }
-static String history_date_from_epoch_local(uint32_t ts_utc) {
-	time_t t = static_cast<time_t>(ts_utc);
-	struct tm tm_local;
-	localtime_r(&t, &tm_local);
-	char buf[16];
-	snprintf(buf, sizeof(buf), "%04d-%02d-%02d", tm_local.tm_year + 1900, tm_local.tm_mon + 1, tm_local.tm_mday);
-	return String(buf);
-}
-static String history_date_from_epoch_utc(uint32_t ts_utc) {
+static String history_date_from_epoch(uint32_t ts_utc) {
 	time_t t = static_cast<time_t>(ts_utc);
 	struct tm tm_utc;
 	gmtime_r(&t, &tm_utc);
@@ -261,40 +225,6 @@ static String history_date_from_epoch_utc(uint32_t ts_utc) {
 	return String(buf);
 }
-static bool history_parse_u32_field(const char *start, size_t len, uint32_t &out) {
-	if (!start || len == 0 || len >= 16) {
-		return false;
-	}
-	char buf[16];
-	memcpy(buf, start, len);
-	buf[len] = '\0';
-	char *end = nullptr;
-	unsigned long value = strtoul(buf, &end, 10);
-	if (end == buf || *end != '\0' || value > static_cast<unsigned long>(UINT32_MAX)) {
-		return false;
-	}
-	out = static_cast<uint32_t>(value);
-	return true;
-}
-static bool history_parse_float_field(const char *start, size_t len, float &out) {
-	if (!start || len == 0 || len >= 24) {
-		return false;
-	}
-	char buf[24];
-	memcpy(buf, start, len);
-	buf[len] = '\0';
-	char *end = nullptr;
-	float value = strtof(buf, &end);
-	if (end == buf || *end != '\0') {
-		return false;
-	}
-	out = value;
-	return true;
-}
 static bool history_open_next_file() {
 	if (!g_history.active || g_history.done || g_history.error) {
 		return false;
@@ -307,17 +237,8 @@ static bool history_open_next_file() {
 		g_history.done = true;
 		return false;
 	}
-	String local_date = history_date_from_epoch_local(day_ts);
-	String path = String("/dd3/") + g_history.device_id + "/" + local_date + ".csv";
+	String path = String("/dd3/") + g_history.device_id + "/" + history_date_from_epoch(day_ts) + ".csv";
 	g_history.file = SD.open(path.c_str(), FILE_READ);
-	if (!g_history.file) {
-		// Compatibility fallback for files written before local-date partitioning.
-		String utc_date = history_date_from_epoch_utc(day_ts);
-		if (utc_date != local_date) {
-			String legacy_path = String("/dd3/") + g_history.device_id + "/" + utc_date + ".csv";
-			g_history.file = SD.open(legacy_path.c_str(), FILE_READ);
-		}
-	}
 	g_history.day_index++;
 	return true;
 }
@@ -326,32 +247,36 @@ static bool history_parse_line(const char *line, uint32_t &ts_out, float &p_out)
 	if (!line || line[0] < '0' || line[0] > '9') {
 		return false;
 	}
-	const char *comma1 = strchr(line, ',');
-	if (!comma1) {
-		return false;
-	}
-	uint32_t ts = 0;
-	if (!history_parse_u32_field(line, static_cast<size_t>(comma1 - line), ts)) {
-		return false;
-	}
-	const char *comma2 = strchr(comma1 + 1, ',');
-	if (!comma2) {
-		return false;
-	}
-	float p = 0.0f;
-	if (!history_parse_float_field(comma1 + 1, static_cast<size_t>(comma2 - (comma1 + 1)), p)) {
-		return false;
-	}
-	const char *p_start = comma2 + 1;
-	const char *p_end = strchr(p_start, ',');
-	size_t p_len = p_end ? static_cast<size_t>(p_end - p_start) : strlen(p_start);
-	if (!history_parse_float_field(p_start, p_len, p)) {
-		return false;
-	}
+	const char *comma = strchr(line, ',');
+	if (!comma) {
+		return false;
+	}
+	char ts_buf[16];
+	size_t ts_len = static_cast<size_t>(comma - line);
+	if (ts_len >= sizeof(ts_buf)) {
+		return false;
+	}
+	memcpy(ts_buf, line, ts_len);
+	ts_buf[ts_len] = '\0';
+	char *end = nullptr;
+	uint32_t ts = static_cast<uint32_t>(strtoul(ts_buf, &end, 10));
+	if (end == ts_buf) {
+		return false;
+	}
+	const char *p_start = comma + 1;
+	const char *p_end = strchr(p_start, ',');
+	char p_buf[16];
+	size_t p_len = p_end ? static_cast<size_t>(p_end - p_start) : strlen(p_start);
+	if (p_len >= sizeof(p_buf)) {
+		return false;
+	}
+	memcpy(p_buf, p_start, p_len);
+	p_buf[p_len] = '\0';
+	char *endp = nullptr;
+	float p = strtof(p_buf, &endp);
+	if (endp == p_buf) {
+		return false;
+	}
 	ts_out = ts;
 	p_out = p;
 	return true;
@@ -438,6 +363,7 @@ static String render_sender_block(const SenderStatus &status) {
 		s += " RSSI:" + String(status.last_data.link_rssi_dbm) + " SNR:" + String(status.last_data.link_snr_db, 1);
 	}
 	if (status.has_data) {
+		s += " ack:" + String(status.last_acked_batch_id);
 		s += " err_tx:" + String(status.last_data.err_lora_tx);
 		s += " err_last:" + String(static_cast<uint8_t>(status.last_data.last_error));
 		s += " (" + String(fault_text(status.last_data.last_error)) + ")";
@@ -449,29 +375,21 @@ static String render_sender_block(const SenderStatus &status) {
 	if (!status.has_data) {
 		s += "No data";
 	} else {
-		s += "Last update: " + format_epoch_local_hms(status.last_update_ts_utc);
-		if (time_is_synced()) {
-			s += " (" + String(timestamp_age_seconds(status.last_update_ts_utc)) + "s ago)";
-		}
-		s += "<br>";
+		if (status.last_data.energy_multi) {
+			s += "Energy1: " + String(status.last_data.energy_kwh_int[0]) + " kWh<br>";
+			s += "Energy2: " + String(status.last_data.energy_kwh_int[1]) + " kWh<br>";
+			if (status.last_data.energy_meter_count >= 3) {
+				s += "Energy3: " + String(status.last_data.energy_kwh_int[2]) + " kWh<br>";
+			}
+		} else {
 		s += "Energy: " + String(status.last_data.energy_total_kwh, 2) + " kWh<br>";
 		s += "Power: " + String(round_power_w(status.last_data.total_power_w)) + " W<br>";
 		s += "P1/P2/P3: " + String(round_power_w(status.last_data.phase_power_w[0])) + " / " +
 		     String(round_power_w(status.last_data.phase_power_w[1])) + " / " +
 		     String(round_power_w(status.last_data.phase_power_w[2])) + " W<br>";
+		}
 		s += "Battery: " + String(status.last_data.battery_percent) + "% (" + String(status.last_data.battery_voltage_v, 2) + " V)";
 	}
-	uint32_t total_batches = status.rx_batches_total;
-	uint32_t duplicate_batches = status.rx_batches_duplicate;
-	float duplicate_pct = 0.0f;
-	if (total_batches > 0) {
-		duplicate_pct = (static_cast<float>(duplicate_batches) * 100.0f) / static_cast<float>(total_batches);
-	}
-	s += "<br>Dup batches: " + String(duplicate_batches) + "/" + String(total_batches) + " (" + String(duplicate_pct, 1) + "%)";
-	s += " last: " + format_epoch_local_hms(status.rx_last_duplicate_ts_utc);
-	if (time_is_synced() && status.rx_last_duplicate_ts_utc > 0) {
-		s += " (" + String(timestamp_age_seconds(status.rx_last_duplicate_ts_utc)) + "s ago)";
-	}
 	s += "</div>";
 	return s;
 }
@@ -611,21 +529,10 @@ static void handle_wifi_post() {
 		cfg.ntp_server_2 = server.arg("ntp2");
 	}
 	cfg.valid = true;
-	if (!wifi_save_config(cfg)) {
-		if (SERIAL_DEBUG_MODE) {
-			Serial.println("wifi_cfg: save failed, reboot cancelled");
-		}
-		String html = html_header("WiFi/MQTT Config");
-		html += "<p style='color:#b00020;'>Save failed. Configuration was not persisted and reboot was cancelled.</p>";
-		html += "<p><a href='/wifi'>Back to config</a></p>";
-		html += html_footer();
-		server.send(500, "text/html", html);
-		return;
-	}
 	g_config = cfg;
 	g_web_user = cfg.web_user;
 	g_web_pass = cfg.web_pass;
+	wifi_save_config(cfg);
 	server.send(200, "text/html", "<html><body>Saved. Rebooting...</body></html>");
 	delay(1000);
 	ESP.restart();
@@ -693,11 +600,10 @@ static void handle_sender() {
 	html += "if(min===max){min=0;}";
 	html += "ctx.strokeStyle='#333';ctx.lineWidth=1;ctx.beginPath();";
 	html += "let first=true;";
-	html += "const xDen=series.length>1?(series.length-1):1;";
 	html += "for(let i=0;i<series.length;i++){";
 	html += "const v=series[i][1];";
 	html += "if(v===null)continue;";
-	html += "const x=series.length>1?((i/xDen)*(w-2)+1):(w/2);";
+	html += "const x=(i/(series.length-1))* (w-2) + 1;";
 	html += "const y=h-2-((v-min)/(max-min))*(h-4);";
 	html += "if(first){ctx.moveTo(x,y);first=false;} else {ctx.lineTo(x,y);} }";
 	html += "ctx.stroke();";
@@ -708,15 +614,16 @@ static void handle_sender() {
 	if (g_last_batch_count[i] > 0) {
 		html += "<h3>Last batch (" + String(g_last_batch_count[i]) + " samples)</h3>";
 		html += "<table border='1' cellspacing='0' cellpadding='3'>";
-		html += "<tr><th>#</th><th>ts_utc</th><th>ts_hms_local</th><th>e_kwh</th><th>p_w</th><th>p1_w</th><th>p2_w</th><th>p3_w</th>";
+		html += "<tr><th>#</th><th>ts</th><th>energy1_kwh</th><th>energy2_kwh</th><th>energy3_kwh</th><th>p_w</th><th>p1_w</th><th>p2_w</th><th>p3_w</th>";
 		html += "<th>bat_v</th><th>bat_pct</th><th>rssi</th><th>snr</th><th>err_tx</th><th>err_last</th><th>rx_reject</th></tr>";
 		for (uint8_t r = 0; r < g_last_batch_count[i]; ++r) {
 			const MeterData &d = g_last_batch[i][r];
 			html += "<tr>";
 			html += "<td>" + String(r) + "</td>";
 			html += "<td>" + String(d.ts_utc) + "</td>";
-			html += "<td>" + format_local_hms(d.ts_utc) + "</td>";
-			html += "<td>" + String(d.energy_total_kwh, 2) + "</td>";
+			html += "<td>" + String(d.energy_kwh_int[0]) + "</td>";
+			html += "<td>" + String(d.energy_kwh_int[1]) + "</td>";
+			html += "<td>" + String(d.energy_kwh_int[2]) + "</td>";
 			html += "<td>" + String(round_power_w(d.total_power_w)) + "</td>";
 			html += "<td>" + String(round_power_w(d.phase_power_w[0])) + "</td>";
 			html += "<td>" + String(round_power_w(d.phase_power_w[1])) + "</td>";
 			html += "<td>" + String(round_power_w(d.phase_power_w[2])) + "</td>";
@@ -754,7 +661,7 @@ static void handle_manual() {
 	html += "<li>RSSI/SNR: LoRa link quality from last packet.</li>";
 	html += "<li>err_tx: sender-side LoRa TX error counter.</li>";
 	html += "<li>err_last: last error code (0=None, 1=MeterRead, 2=Decode, 3=LoraTx).</li>";
-	html += "<li>rx_reject: last RX reject reason (0=None, 1=crc_fail, 2=invalid_msg_kind, 3=length_mismatch, 4=device_id_mismatch, 5=batch_id_mismatch, 6=unknown_sender).</li>";
+	html += "<li>rx_reject: last RX reject reason (0=None, 1=crc_fail, 2=invalid_msg_kind, 3=length_mismatch, 4=device_id_mismatch, 5=batch_id_mismatch).</li>";
 	html += "<li>faults m/d/tx: receiver-side counters (meter read fails, decode fails, LoRa TX fails).</li>";
 	html += "<li>faults last: last receiver-side error code (same mapping as err_last).</li>";
 	html += "</ul>";
@@ -793,14 +700,12 @@ static void handle_history_start() {
 	if (res_min < SD_HISTORY_MIN_RES_MIN) {
 		res_min = SD_HISTORY_MIN_RES_MIN;
 	}
-	// Use uint64_t for intermediate calculation to prevent overflow
-	uint64_t bins_64 = (static_cast<uint64_t>(days) * 24UL * 60UL) / res_min;
-	if (bins_64 == 0 || bins_64 > SD_HISTORY_MAX_BINS) {
+	uint32_t bins = (static_cast<uint32_t>(days) * 24UL * 60UL) / res_min;
+	if (bins == 0 || bins > SD_HISTORY_MAX_BINS) {
 		String resp = String("{\"ok\":false,\"error\":\"too_many_bins\",\"max_bins\":") + SD_HISTORY_MAX_BINS + "}";
 		server.send(200, "application/json", resp);
 		return;
 	}
-	uint32_t bins = static_cast<uint32_t>(bins_64);
 	history_reset();
 	g_history.active = true;

View File

@@ -5,59 +5,6 @@
 static Preferences prefs;
-static bool wifi_log_save_failure(const char *key, const char *reason) {
-	if (SERIAL_DEBUG_MODE) {
-		Serial.printf("wifi_cfg: save failed key=%s reason=%s\n", key, reason);
-	}
-	return false;
-}
-static bool wifi_write_string_pref(const char *key, const String &value) {
-	size_t written = prefs.putString(key, value);
-	if (written != value.length()) {
-		return wifi_log_save_failure(key, "write_short");
-	}
-	if (!prefs.isKey(key)) {
-		return wifi_log_save_failure(key, "missing_key");
-	}
-	String readback = prefs.getString(key, "");
-	if (readback != value) {
-		return wifi_log_save_failure(key, "verify_mismatch");
-	}
-	return true;
-}
-static bool wifi_write_bool_pref(const char *key, bool value) {
-	size_t written = prefs.putBool(key, value);
-	if (written != sizeof(uint8_t)) {
-		return wifi_log_save_failure(key, "write_short");
-	}
-	if (!prefs.isKey(key)) {
-		return wifi_log_save_failure(key, "missing_key");
-	}
-	bool readback = prefs.getBool(key, !value);
-	if (readback != value) {
-		return wifi_log_save_failure(key, "verify_mismatch");
-	}
-	return true;
-}
-static bool wifi_write_ushort_pref(const char *key, uint16_t value) {
-	size_t written = prefs.putUShort(key, value);
-	if (written != sizeof(uint16_t)) {
-		return wifi_log_save_failure(key, "write_short");
-	}
-	if (!prefs.isKey(key)) {
-		return wifi_log_save_failure(key, "missing_key");
-	}
-	uint16_t fallback = value == static_cast<uint16_t>(0xFFFF) ? 0 : static_cast<uint16_t>(0xFFFF);
-	uint16_t readback = prefs.getUShort(key, fallback);
-	if (readback != value) {
-		return wifi_log_save_failure(key, "verify_mismatch");
-	}
-	return true;
-}
 void wifi_manager_init() {
 	prefs.begin("dd3cfg", false);
 }
@@ -81,39 +28,17 @@ bool wifi_load_config(WifiMqttConfig &config) {
 }
 bool wifi_save_config(const WifiMqttConfig &config) {
-	if (!wifi_write_bool_pref("valid", true)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("ssid", config.ssid)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("pass", config.password)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("mqhost", config.mqtt_host)) {
-		return false;
-	}
-	if (!wifi_write_ushort_pref("mqport", config.mqtt_port)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("mquser", config.mqtt_user)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("mqpass", config.mqtt_pass)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("ntp1", config.ntp_server_1)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("ntp2", config.ntp_server_2)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("webuser", config.web_user)) {
-		return false;
-	}
-	if (!wifi_write_string_pref("webpass", config.web_pass)) {
-		return false;
-	}
+	prefs.putBool("valid", true);
+	prefs.putString("ssid", config.ssid);
+	prefs.putString("pass", config.password);
+	prefs.putString("mqhost", config.mqtt_host);
+	prefs.putUShort("mqport", config.mqtt_port);
+	prefs.putString("mquser", config.mqtt_user);
+	prefs.putString("mqpass", config.mqtt_pass);
+	prefs.putString("ntp1", config.ntp_server_1);
+	prefs.putString("ntp2", config.ntp_server_2);
+	prefs.putString("webuser", config.web_user);
+	prefs.putString("webpass", config.web_pass);
 	return true;
 }
@@ -143,52 +68,3 @@ bool wifi_is_connected() {
 String wifi_get_ssid() {
 	return WiFi.SSID();
 }
-// Try to reconnect to WiFi with a shorter timeout (for periodic reconnection attempts)
-// Called when device is stuck in AP mode and we want to try switching back to STA
-bool wifi_try_reconnect_sta(const WifiMqttConfig &config, uint32_t timeout_ms) {
-	// Only attempt if not already connected and config is valid
-	if (WiFi.status() == WL_CONNECTED) {
-		return true;
-	}
-	// Check if config is valid
-	if (config.ssid.length() == 0 || config.mqtt_host.length() == 0) {
-		return false;
-	}
-	// Switch to STA mode and attempt connection with shorter timeout
-	WiFi.mode(WIFI_STA);
-	WiFi.begin(config.ssid.c_str(), config.password.c_str());
-	uint32_t start = millis();
-	while (WiFi.status() != WL_CONNECTED && millis() - start < timeout_ms) {
-		delay(200);
-	}
-	bool connected = WiFi.status() == WL_CONNECTED;
-	if (connected) {
-		esp_wifi_set_ps(WIFI_PS_MIN_MODEM);
-		if (SERIAL_DEBUG_MODE) {
-			Serial.printf("wifi_reconnect: success, connected to %s\n", config.ssid.c_str());
-		}
-	} else {
-		if (SERIAL_DEBUG_MODE) {
-			Serial.printf("wifi_reconnect: failed, remaining in STA mode\n");
-		}
-	}
-	return connected;
-}
-// Helper function to restore AP mode when reconnection attempt has failed
-void wifi_restore_ap_mode(const char *ap_ssid, const char *ap_pass) {
-	if (WiFi.status() != WL_CONNECTED) {
-		// We're not connected to WiFi, restore AP mode
-		WiFi.mode(WIFI_AP);
-		WiFi.softAP(ap_ssid, ap_pass);
-		if (SERIAL_DEBUG_MODE) {
-			Serial.printf("wifi_restore_ap: AP mode restored\n");
-		}
-	}
-}

View File

@@ -1,37 +0,0 @@
Set-StrictMode -Version Latest
$ErrorActionPreference = "Stop"
$repoRoot = (Resolve-Path (Join-Path $PSScriptRoot "..")).ProviderPath
$configPath = (Resolve-Path (Join-Path $repoRoot "include/config.h")).ProviderPath
$mqttPath = (Resolve-Path (Join-Path $repoRoot "src/mqtt_client.cpp")).ProviderPath
$configText = Get-Content -Raw -Path $configPath
if ($configText -notmatch 'HA_MANUFACTURER\[\]\s*=\s*"AcidBurns"\s*;') {
throw "include/config.h must define HA_MANUFACTURER as exactly ""AcidBurns""."
}
$mqttText = Get-Content -Raw -Path $mqttPath
if ($mqttText -notmatch 'device\["manufacturer"\]\s*=\s*HA_MANUFACTURER\s*;') {
throw "src/mqtt_client.cpp must assign device[""manufacturer""] from HA_MANUFACTURER."
}
if ($mqttText -match 'device\["manufacturer"\]\s*=\s*"[^"]+"\s*;') {
throw "src/mqtt_client.cpp must not hardcode manufacturer string literals."
}
$roots = @(
Join-Path $repoRoot "src"
Join-Path $repoRoot "include"
)
$literalHits = Get-ChildItem -Path $roots -Recurse -File -Include *.c,*.cc,*.cpp,*.h,*.hpp |
Select-String -Pattern '"AcidBurns"' |
Where-Object { (Resolve-Path $_.Path).ProviderPath -ne $configPath }
if ($literalHits) {
$details = $literalHits | ForEach-Object {
"$($_.Path):$($_.LineNumber)"
}
throw "Unexpected hardcoded ""AcidBurns"" literal(s) outside include/config.h:`n$($details -join "`n")"
}
Write-Host "HA manufacturer drift check passed."

View File

@@ -1,6 +1,5 @@
 #include <Arduino.h>
 #include <unity.h>
-#include "dd3_legacy_core.h"
 #include "html_util.h"
 static void test_html_escape_basic() {
@@ -13,122 +12,25 @@ static void test_html_escape_basic() {
 	TEST_ASSERT_EQUAL_STRING("&amp;&lt;&gt;&quot;&#39;", html_escape("&<>\"'").c_str());
 }
-static void test_html_escape_adversarial() {
-	TEST_ASSERT_EQUAL_STRING("&amp;amp;", html_escape("&amp;").c_str());
-	TEST_ASSERT_EQUAL_STRING("\n\r\t", html_escape("\n\r\t").c_str());
-	const String chunk = "<&>\"'abc\n\r\t";
-	const String escaped_chunk = "&lt;&amp;&gt;&quot;&#39;abc\n\r\t";
-	const size_t repeats = 300; // 3.3 KB input
-	String input;
-	String expected;
-	input.reserve(chunk.length() * repeats);
-	expected.reserve(escaped_chunk.length() * repeats);
-	for (size_t i = 0; i < repeats; ++i) {
-		input += chunk;
-		expected += escaped_chunk;
-	}
-	String out = html_escape(input);
-	TEST_ASSERT_EQUAL_UINT(expected.length(), out.length());
-	TEST_ASSERT_EQUAL_STRING(expected.c_str(), out.c_str());
-	TEST_ASSERT_TRUE(out.indexOf("&lt;&amp;&gt;&quot;&#39;abc") >= 0);
-}
-static void test_url_encode_component_table() {
-	struct Case {
-		const char *input;
-		const char *expected;
-	};
-	const Case cases[] = {
-		{"", ""},
-		{"abcABC012-_.~", "abcABC012-_.~"},
-		{"a b", "a%20b"},
-		{"/\\?&#%\"'", "%2F%5C%3F%26%23%25%22%27"},
-		{"line\nbreak", "line%0Abreak"},
-	};
-	for (size_t i = 0; i < (sizeof(cases) / sizeof(cases[0])); ++i) {
-		String out = url_encode_component(cases[i].input);
-		TEST_ASSERT_EQUAL_STRING(cases[i].expected, out.c_str());
-	}
-	String control;
-	control += static_cast<char>(0x01);
-	control += static_cast<char>(0x1F);
-	control += static_cast<char>(0x7F);
-	TEST_ASSERT_EQUAL_STRING("%01%1F%7F", url_encode_component(control).c_str());
-	const String long_chunk = "AZaz09-_.~ /%?";
-	const String long_expected_chunk = "AZaz09-_.~%20%2F%25%3F";
-	String long_input;
-	String long_expected;
-	for (size_t i = 0; i < 40; ++i) { // 520 chars
-		long_input += long_chunk;
-		long_expected += long_expected_chunk;
-	}
-	String long_out_1 = url_encode_component(long_input);
-	String long_out_2 = url_encode_component(long_input);
-	TEST_ASSERT_EQUAL_STRING(long_expected.c_str(), long_out_1.c_str());
-	TEST_ASSERT_EQUAL_STRING(long_out_1.c_str(), long_out_2.c_str());
-}
-static void test_sanitize_device_id_accepts_and_normalizes() {
-	String out;
-	const char *accept_cases[] = {
-		"F19C",
-		"f19c",
-		" f19c ",
-		"dd3-f19c",
-		"dd3-F19C",
-		"dd3-a0b1",
-	};
-	for (size_t i = 0; i < (sizeof(accept_cases) / sizeof(accept_cases[0])); ++i) {
-		TEST_ASSERT_TRUE(sanitize_device_id(accept_cases[i], out));
-		if (String(accept_cases[i]).indexOf("a0b1") >= 0) {
-			TEST_ASSERT_EQUAL_STRING("dd3-A0B1", out.c_str());
-		} else {
-			TEST_ASSERT_EQUAL_STRING("dd3-F19C", out.c_str());
-		}
-	}
-}
-static void test_sanitize_device_id_rejects_invalid() {
-	String out = "dd3-KEEP";
-	const char *reject_cases[] = {
-		"",
-		"F",
-		"FFF",
-		"FFFFF",
-		"dd3-12",
-		"dd3-12345",
-		"F1 9C",
-		"dd3-F1\t9C",
-		"dd3-F19C%00",
-		"%F19C",
-		"../F19C",
-		"dd3-..1A",
-		"dd3-12/3",
-		"dd3-12\\3",
-		"F19G",
-		"dd3-zzzz",
-	};
-	for (size_t i = 0; i < (sizeof(reject_cases) / sizeof(reject_cases[0])); ++i) {
-		TEST_ASSERT_FALSE(sanitize_device_id(reject_cases[i], out));
-	}
-	TEST_ASSERT_EQUAL_STRING("dd3-KEEP", out.c_str());
-}
+static void test_sanitize_device_id() {
+	String out;
+	TEST_ASSERT_TRUE(sanitize_device_id("F19C", out));
+	TEST_ASSERT_EQUAL_STRING("dd3-F19C", out.c_str());
+	TEST_ASSERT_TRUE(sanitize_device_id("dd3-f19c", out));
+	TEST_ASSERT_EQUAL_STRING("dd3-F19C", out.c_str());
+	TEST_ASSERT_FALSE(sanitize_device_id("F19G", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("dd3-12", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("dd3-12345", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("../F19C", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("dd3-%2f", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("dd3-12/3", out));
+	TEST_ASSERT_FALSE(sanitize_device_id("dd3-12\\3", out));
+}
 void setup() {
-	dd3_legacy_core_force_link();
 	UNITY_BEGIN();
 	RUN_TEST(test_html_escape_basic);
-	RUN_TEST(test_html_escape_adversarial);
-	RUN_TEST(test_url_encode_component_table);
-	RUN_TEST(test_sanitize_device_id_accepts_and_normalizes);
-	RUN_TEST(test_sanitize_device_id_rejects_invalid);
+	RUN_TEST(test_sanitize_device_id);
 	UNITY_END();
 }

View File

@@ -1,129 +0,0 @@
#include <Arduino.h>
#include <unity.h>
#include <ArduinoJson.h>
#include "config.h"
#include "data_model.h"
#include "dd3_legacy_core.h"
#include "ha_discovery_json.h"
#include "json_codec.h"
static void fill_state_sample(MeterData &data) {
data = {};
data.ts_utc = 1769905000;
data.short_id = 0xF19C;
strncpy(data.device_id, "dd3-F19C", sizeof(data.device_id));
data.energy_total_kwh = 1234.5678f;
data.total_power_w = 321.6f;
data.phase_power_w[0] = 100.4f;
data.phase_power_w[1] = 110.4f;
data.phase_power_w[2] = 110.8f;
data.battery_voltage_v = 3.876f;
data.battery_percent = 77;
data.link_valid = true;
data.link_rssi_dbm = -71;
data.link_snr_db = 7.25f;
data.err_meter_read = 1;
data.err_decode = 2;
data.err_lora_tx = 3;
data.last_error = FaultType::Decode;
data.rx_reject_reason = static_cast<uint8_t>(RxRejectReason::CrcFail);
}
static void test_state_json_required_keys_and_stability() {
MeterData data = {};
fill_state_sample(data);
String out_json;
TEST_ASSERT_TRUE(meterDataToJson(data, out_json));
StaticJsonDocument<512> doc;
DeserializationError err = deserializeJson(doc, out_json);
TEST_ASSERT_TRUE(err == DeserializationError::Ok);
const char *required_keys[] = {
"id", "ts", "e_kwh", "p_w", "p1_w", "p2_w", "p3_w",
"bat_v", "bat_pct", "rssi", "snr", "err_m", "err_d",
"err_tx", "err_last", "rx_reject", "rx_reject_text"};
for (size_t i = 0; i < (sizeof(required_keys) / sizeof(required_keys[0])); ++i) {
TEST_ASSERT_TRUE_MESSAGE(doc.containsKey(required_keys[i]), required_keys[i]);
}
TEST_ASSERT_EQUAL_STRING("F19C", doc["id"] | "");
TEST_ASSERT_EQUAL_UINT32(data.ts_utc, doc["ts"] | 0U);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(FaultType::Decode), doc["err_last"] | 0U);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(RxRejectReason::CrcFail), doc["rx_reject"] | 0U);
TEST_ASSERT_EQUAL_STRING("crc_fail", doc["rx_reject_text"] | "");
TEST_ASSERT_FALSE(doc.containsKey("energy_total_kwh"));
TEST_ASSERT_FALSE(doc.containsKey("power_w"));
TEST_ASSERT_FALSE(doc.containsKey("battery_voltage"));
}
static void test_state_json_optional_keys_when_not_available() {
MeterData data = {};
fill_state_sample(data);
data.link_valid = false;
data.err_meter_read = 0;
data.err_decode = 0;
data.err_lora_tx = 0;
data.rx_reject_reason = static_cast<uint8_t>(RxRejectReason::None);
String out_json;
TEST_ASSERT_TRUE(meterDataToJson(data, out_json));
StaticJsonDocument<512> doc;
DeserializationError err = deserializeJson(doc, out_json);
TEST_ASSERT_TRUE(err == DeserializationError::Ok);
TEST_ASSERT_FALSE(doc.containsKey("rssi"));
TEST_ASSERT_FALSE(doc.containsKey("snr"));
TEST_ASSERT_FALSE(doc.containsKey("err_m"));
TEST_ASSERT_FALSE(doc.containsKey("err_d"));
TEST_ASSERT_FALSE(doc.containsKey("err_tx"));
TEST_ASSERT_EQUAL_STRING("none", doc["rx_reject_text"] | "");
}
static void test_ha_discovery_manufacturer_and_key_stability() {
String payload;
TEST_ASSERT_TRUE(ha_build_discovery_sensor_payload(
"dd3-F19C", "energy", "Energy", "kWh", "energy",
"smartmeter/dd3-F19C/state", "{{ value_json.e_kwh }}",
HA_MANUFACTURER, payload));
StaticJsonDocument<384> doc;
DeserializationError err = deserializeJson(doc, payload);
TEST_ASSERT_TRUE(err == DeserializationError::Ok);
TEST_ASSERT_TRUE(doc.containsKey("name"));
TEST_ASSERT_TRUE(doc.containsKey("state_topic"));
TEST_ASSERT_TRUE(doc.containsKey("unique_id"));
TEST_ASSERT_TRUE(doc.containsKey("value_template"));
TEST_ASSERT_TRUE(doc.containsKey("device"));
TEST_ASSERT_EQUAL_STRING("dd3-F19C_energy", doc["unique_id"] | "");
TEST_ASSERT_EQUAL_STRING("smartmeter/dd3-F19C/state", doc["state_topic"] | "");
TEST_ASSERT_EQUAL_STRING("{{ value_json.e_kwh }}", doc["value_template"] | "");
JsonObject device = doc["device"].as<JsonObject>();
TEST_ASSERT_TRUE(device.containsKey("identifiers"));
TEST_ASSERT_TRUE(device.containsKey("name"));
TEST_ASSERT_TRUE(device.containsKey("model"));
TEST_ASSERT_TRUE(device.containsKey("manufacturer"));
TEST_ASSERT_EQUAL_STRING("DD3-LoRa-Bridge", device["model"] | "");
TEST_ASSERT_EQUAL_STRING("AcidBurns", device["manufacturer"] | "");
TEST_ASSERT_EQUAL_STRING("dd3-F19C", device["name"] | "");
TEST_ASSERT_EQUAL_STRING("dd3-F19C", device["identifiers"][0] | "");
}
void setup() {
dd3_legacy_core_force_link();
UNITY_BEGIN();
RUN_TEST(test_state_json_required_keys_and_stability);
RUN_TEST(test_state_json_optional_keys_when_not_available);
RUN_TEST(test_ha_discovery_manufacturer_and_key_stability);
UNITY_END();
}
void loop() {}

View File

@@ -1,131 +0,0 @@
#include <Arduino.h>
#include <unity.h>
#include "batch_reassembly_logic.h"
#include "lora_frame_logic.h"
static void test_crc16_known_vectors() {
const uint8_t canonical[] = {'1', '2', '3', '4', '5', '6', '7', '8', '9'};
TEST_ASSERT_EQUAL_HEX16(0x29B1, lora_crc16_ccitt(canonical, sizeof(canonical)));
const uint8_t binary[] = {0x00, 0x01, 0x02, 0x03, 0x04};
TEST_ASSERT_EQUAL_HEX16(0x1C0F, lora_crc16_ccitt(binary, sizeof(binary)));
}
static void test_frame_encode_decode_and_crc_reject() {
const uint8_t payload[] = {0x01, 0x02, 0xA5};
uint8_t frame[64] = {};
size_t frame_len = 0;
TEST_ASSERT_TRUE(lora_build_frame(0, 0xF19C, payload, sizeof(payload), frame, sizeof(frame), frame_len));
TEST_ASSERT_EQUAL_UINT(8, frame_len);
uint8_t out_kind = 0xFF;
uint16_t out_device_id = 0;
uint8_t out_payload[16] = {};
size_t out_payload_len = 0;
LoraFrameDecodeStatus ok = lora_parse_frame(frame, frame_len, 1, &out_kind, &out_device_id, out_payload,
sizeof(out_payload), &out_payload_len);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::Ok), static_cast<uint8_t>(ok));
TEST_ASSERT_EQUAL_UINT8(0, out_kind);
TEST_ASSERT_EQUAL_UINT16(0xF19C, out_device_id);
TEST_ASSERT_EQUAL_UINT(sizeof(payload), out_payload_len);
TEST_ASSERT_EQUAL_UINT8_ARRAY(payload, out_payload, sizeof(payload));
frame[frame_len - 1] ^= 0x01;
LoraFrameDecodeStatus bad_crc = lora_parse_frame(frame, frame_len, 1, &out_kind, &out_device_id, out_payload,
sizeof(out_payload), &out_payload_len);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::CrcFail), static_cast<uint8_t>(bad_crc));
}
static void test_frame_rejects_invalid_msg_kind_and_short_length() {
const uint8_t payload[] = {0x42};
uint8_t frame[32] = {};
size_t frame_len = 0;
TEST_ASSERT_TRUE(lora_build_frame(2, 0xF19C, payload, sizeof(payload), frame, sizeof(frame), frame_len));
uint8_t out_kind = 0;
uint16_t out_device_id = 0;
uint8_t out_payload[8] = {};
size_t out_payload_len = 0;
LoraFrameDecodeStatus invalid_msg = lora_parse_frame(frame, frame_len, 1, &out_kind, &out_device_id, out_payload,
sizeof(out_payload), &out_payload_len);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::InvalidMsgKind), static_cast<uint8_t>(invalid_msg));
LoraFrameDecodeStatus short_len = lora_parse_frame(frame, 4, 1, &out_kind, &out_device_id, out_payload,
sizeof(out_payload), &out_payload_len);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::LengthMismatch), static_cast<uint8_t>(short_len));
}
static void test_chunk_reassembly_in_order_success() {
BatchReassemblyState state = {};
batch_reassembly_reset(state);
const uint8_t payload[] = {1, 2, 3, 4, 5, 6, 7};
uint8_t buffer[32] = {};
uint16_t complete_len = 0;
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
static_cast<uint8_t>(batch_reassembly_push(state, 77, 0, 3, 7, &payload[0], 3, 1000, 5000, 32, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
static_cast<uint8_t>(batch_reassembly_push(state, 77, 1, 3, 7, &payload[3], 2, 1100, 5000, 32, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::Complete),
static_cast<uint8_t>(batch_reassembly_push(state, 77, 2, 3, 7, &payload[5], 2, 1200, 5000, 32, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_EQUAL_UINT16(7, complete_len);
TEST_ASSERT_FALSE(state.active);
TEST_ASSERT_EQUAL_UINT8_ARRAY(payload, buffer, sizeof(payload));
}
static void test_chunk_reassembly_missing_or_out_of_order_fails_deterministically() {
BatchReassemblyState state = {};
batch_reassembly_reset(state);
const uint8_t payload[] = {9, 8, 7, 6, 5, 4};
uint8_t buffer[32] = {};
uint16_t complete_len = 0;
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
static_cast<uint8_t>(batch_reassembly_push(state, 10, 0, 3, 6, &payload[0], 2, 1000, 5000, 32, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
static_cast<uint8_t>(batch_reassembly_push(state, 10, 2, 3, 6, &payload[4], 2, 1100, 5000, 32, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_FALSE(state.active);
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
static_cast<uint8_t>(batch_reassembly_push(state, 11, 1, 3, 6, &payload[2], 2, 1200, 5000, 32, buffer, sizeof(buffer), complete_len)));
}
static void test_chunk_reassembly_wrong_total_length_fails() {
BatchReassemblyState state = {};
batch_reassembly_reset(state);
const uint8_t payload[] = {1, 2, 3, 4, 5, 6};
uint8_t buffer[8] = {};
uint16_t complete_len = 0;
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
static_cast<uint8_t>(batch_reassembly_push(state, 55, 0, 2, 5, &payload[0], 3, 1000, 5000, 8, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_EQUAL_UINT8(
static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
static_cast<uint8_t>(batch_reassembly_push(state, 55, 1, 2, 5, &payload[3], 3, 1100, 5000, 8, buffer, sizeof(buffer), complete_len)));
TEST_ASSERT_FALSE(state.active);
}
void setup() {
UNITY_BEGIN();
RUN_TEST(test_crc16_known_vectors);
RUN_TEST(test_frame_encode_decode_and_crc_reject);
RUN_TEST(test_frame_rejects_invalid_msg_kind_and_short_length);
RUN_TEST(test_chunk_reassembly_in_order_success);
RUN_TEST(test_chunk_reassembly_missing_or_out_of_order_fails_deterministically);
RUN_TEST(test_chunk_reassembly_wrong_total_length_fails);
UNITY_END();
}
void loop() {}


@@ -1,301 +0,0 @@
/**
* @file test_meter_fault_count.cpp
* @brief Unit test: verifies that the meter fault counter increments once per
* stale-data event, NOT once per catch-up tick.
*
* Regression test for the ~200 errors/hour bug where LoRa TX blocking caused
* the sampling catch-up loop to fire note_fault() for every missed 1s tick.
*
* Run on target with: pio test -e lilygo-t3-v1-6-1-test -f test_meter_fault_count
*/
#include <Arduino.h>
#include <unity.h>
#include "data_model.h"
// ---------- Minimal stubs replicating the fixed fault-counting logic ----------
static FaultCounters test_faults = {};
static FaultType test_last_error = FaultType::None;
static uint32_t test_last_error_utc = 0;
static uint32_t test_last_error_ms = 0;
static void note_fault_stub(FaultCounters &counters, FaultType &last_type,
uint32_t &last_ts_utc, uint32_t &last_ts_ms, FaultType type) {
if (type == FaultType::MeterRead) {
counters.meter_read_fail++;
} else if (type == FaultType::Decode) {
counters.decode_fail++;
} else if (type == FaultType::LoraTx) {
counters.lora_tx_fail++;
}
last_type = type;
last_ts_utc = millis() / 1000;
last_ts_ms = millis();
}
static void reset_test_faults() {
test_faults = {};
test_last_error = FaultType::None;
test_last_error_utc = 0;
test_last_error_ms = 0;
}
// ---------- Simulate the FIXED sampling loop logic ----------
static constexpr uint32_t SAMPLE_INTERVAL_MS = 1000;
/**
* Simulates the fixed sender_loop sampling section.
*
* @param last_sample_ms Tracks the last sample tick (in/out).
* @param now_ms Current millis().
* @param meter_ok Whether the meter snapshot is fresh.
* @param time_jump_pending Whether a time-jump event is pending (in/out).
* @param faults Fault counters (in/out).
* @return Number of samples generated in the catch-up loop.
*/
static uint32_t simulate_fixed_sampling(
uint32_t &last_sample_ms, uint32_t now_ms, bool meter_ok,
bool &time_jump_pending, FaultCounters &faults) {
FaultType last_error = FaultType::None;
uint32_t last_error_utc = 0;
uint32_t last_error_ms = 0;
bool meter_fault_noted = false;
// Time-jump: one fault per event, outside loop.
if (time_jump_pending) {
time_jump_pending = false;
note_fault_stub(faults, last_error, last_error_utc, last_error_ms, FaultType::MeterRead);
meter_fault_noted = true;
}
// Stale meter: one fault per contiguous stale period, outside loop.
if (!meter_ok && !meter_fault_noted) {
note_fault_stub(faults, last_error, last_error_utc, last_error_ms, FaultType::MeterRead);
}
uint32_t samples = 0;
while (now_ms - last_sample_ms >= SAMPLE_INTERVAL_MS) {
last_sample_ms += SAMPLE_INTERVAL_MS;
samples++;
}
return samples;
}
/**
* Simulates the OLD (buggy) sampling loop for comparison.
*/
static uint32_t simulate_buggy_sampling(
uint32_t &last_sample_ms, uint32_t now_ms, bool meter_ok,
bool &time_jump_pending, FaultCounters &faults) {
FaultType last_error = FaultType::None;
uint32_t last_error_utc = 0;
uint32_t last_error_ms = 0;
uint32_t samples = 0;
while (now_ms - last_sample_ms >= SAMPLE_INTERVAL_MS) {
last_sample_ms += SAMPLE_INTERVAL_MS;
samples++;
if (!meter_ok) {
note_fault_stub(faults, last_error, last_error_utc, last_error_ms, FaultType::MeterRead);
}
if (time_jump_pending) {
time_jump_pending = false;
note_fault_stub(faults, last_error, last_error_utc, last_error_ms, FaultType::MeterRead);
}
}
return samples;
}
// ---------- Tests ----------
/**
* Normal operation: meter is fresh, no blocking. 1 tick per call.
* Should produce 0 faults.
*/
static void test_no_fault_when_meter_fresh() {
FaultCounters faults = {};
uint32_t last_sample_ms = 0;
bool time_jump = false;
// Simulate 60 consecutive 1s ticks with fresh meter data.
for (int i = 1; i <= 60; i++) {
simulate_fixed_sampling(last_sample_ms, i * 1000, true, time_jump, faults);
}
TEST_ASSERT_EQUAL_UINT32(0, faults.meter_read_fail);
}
/**
* LoRa TX blocks for 10 seconds while meter is stale.
* OLD code: 10 faults. FIXED code: 1 fault.
*/
static void test_single_fault_after_blocking_stale() {
FaultCounters faults = {};
uint32_t last_sample_ms = 0;
bool time_jump = false;
// 5 normal ticks with fresh data.
for (int i = 1; i <= 5; i++) {
simulate_fixed_sampling(last_sample_ms, i * 1000, true, time_jump, faults);
}
TEST_ASSERT_EQUAL_UINT32(0, faults.meter_read_fail);
// LoRa TX blocks for 10s → meter goes stale.
// now_ms = 15000, last_sample_ms = 5000 → 10 catch-up ticks.
uint32_t samples = simulate_fixed_sampling(last_sample_ms, 15000, false, time_jump, faults);
TEST_ASSERT_EQUAL_UINT32(10, samples); // 10 ticks caught up.
TEST_ASSERT_EQUAL_UINT32(1, faults.meter_read_fail); // But only 1 fault!
}
/**
* Demonstrate the OLD buggy behavior: same scenario produces 10 faults.
*/
static void test_buggy_produces_many_faults() {
FaultCounters faults = {};
uint32_t last_sample_ms = 0;
bool time_jump = false;
for (int i = 1; i <= 5; i++) {
simulate_buggy_sampling(last_sample_ms, i * 1000, true, time_jump, faults);
}
TEST_ASSERT_EQUAL_UINT32(0, faults.meter_read_fail);
simulate_buggy_sampling(last_sample_ms, 15000, false, time_jump, faults);
TEST_ASSERT_EQUAL_UINT32(10, faults.meter_read_fail); // Buggy: 10 faults for one event.
}
/**
* Time-jump event should produce exactly 1 additional fault,
* regardless of how many ticks are caught up.
*/
static void test_time_jump_single_fault() {
FaultCounters faults = {};
uint32_t last_sample_ms = 0;
bool time_jump = true; // Pending time-jump.
// 8 catch-up ticks with stale meter AND time jump pending.
uint32_t samples = simulate_fixed_sampling(last_sample_ms, 8000, false, time_jump, faults);
TEST_ASSERT_EQUAL_UINT32(8, samples);
// Time jump counted as 1, stale suppressed because meter_fault_noted == true.
TEST_ASSERT_EQUAL_UINT32(1, faults.meter_read_fail);
TEST_ASSERT_FALSE(time_jump);
}
/**
* Repeated stale periods should count 1 fault per call to the sampling function,
* not 1 per tick. After 3600s at 1 call/s with meter stale every call,
* the FIXED code should produce ≤ 3600 faults (1 per call).
* The OLD code would produce the same number (since 1 tick per call).
* The difference is when blocking causes N>1 ticks per call.
*/
static void test_sustained_stale_1hz_no_blocking() {
FaultCounters faults = {};
uint32_t last_sample_ms = 0;
bool time_jump = false;
// Simulate 1 hour at 1 Hz with meter always stale (no blocking, 1 tick/call).
for (uint32_t i = 1; i <= 3600; i++) {
simulate_fixed_sampling(last_sample_ms, i * 1000, false, time_jump, faults);
}
// 1 fault per call = 3600 faults. This correctly reflects 3600 distinct evaluations
// where the meter was stale.
TEST_ASSERT_EQUAL_UINT32(3600, faults.meter_read_fail);
}
/**
* Worst-case: 1 hour, main loop blocked for 10s every 30s (batch TX + ACK).
* Each blocking event catches up 10 ticks with stale meter.
*
* OLD: 10 faults per blocking event × 120 blocks = 1200 faults,
* + 20 normal stale ticks between blocks × 120 = 2400 → total ~3600.
*
* FIXED: 1 fault per blocking event + 1 per non-blocked stale call.
* 120 blocking events + 2400 normal calls = 2520.
* (Still correctly counts each loop iteration where meter was stale.)
*/
static void test_periodic_blocking_reduces_faults() {
FaultCounters faults_fixed = {};
FaultCounters faults_buggy = {};
uint32_t last_fixed = 0;
uint32_t last_buggy = 0;
bool tj_fixed = false;
bool tj_buggy = false;
uint32_t t = 0;
for (int cycle = 0; cycle < 120; cycle++) {
// 20s of normal 1Hz polling, meter stale.
for (int s = 0; s < 20; s++) {
t += 1000;
simulate_fixed_sampling(last_fixed, t, false, tj_fixed, faults_fixed);
simulate_buggy_sampling(last_buggy, t, false, tj_buggy, faults_buggy);
}
// 10s blocking (LoRa TX + ACK), meter stale.
t += 10000;
simulate_fixed_sampling(last_fixed, t, false, tj_fixed, faults_fixed);
simulate_buggy_sampling(last_buggy, t, false, tj_buggy, faults_buggy);
}
// Both produce 3600 samples total.
// Buggy: 20*120 normal + 10*120 from catch-up = 3600 faults.
TEST_ASSERT_EQUAL_UINT32(3600, faults_buggy.meter_read_fail);
// Fixed: 20*120 normal + 1*120 from catch-up = 2520 faults.
TEST_ASSERT_EQUAL_UINT32(2520, faults_fixed.meter_read_fail);
// Significant reduction: fixed < buggy.
TEST_ASSERT_TRUE(faults_fixed.meter_read_fail < faults_buggy.meter_read_fail);
}
/**
* Real scenario: meter works fine most of the time; occasional 5-10s stale
* during LoRa TX. With fresh meter otherwise, faults should be minimal.
*
* 1h = 120 batch cycles of 30s.
* Each cycle: 20s meter OK → 10s TX blocking (stale) → continue.
* FIXED: 120 faults/h (one per TX stale event).
* OLD: ~1200 faults/h (10 per TX stale event).
*/
static void test_realistic_scenario_mostly_fresh() {
FaultCounters faults_fixed = {};
FaultCounters faults_buggy = {};
uint32_t last_fixed = 0;
uint32_t last_buggy = 0;
bool tj_fixed = false;
bool tj_buggy = false;
uint32_t t = 0;
for (int cycle = 0; cycle < 120; cycle++) {
// 20s of fresh meter data.
for (int s = 0; s < 20; s++) {
t += 1000;
simulate_fixed_sampling(last_fixed, t, true, tj_fixed, faults_fixed);
simulate_buggy_sampling(last_buggy, t, true, tj_buggy, faults_buggy);
}
// 10s LoRa blocking, meter goes stale.
t += 10000;
simulate_fixed_sampling(last_fixed, t, false, tj_fixed, faults_fixed);
simulate_buggy_sampling(last_buggy, t, false, tj_buggy, faults_buggy);
}
// Fixed: 0 faults during fresh + 1 per stale event = 120 faults/h.
TEST_ASSERT_EQUAL_UINT32(120, faults_fixed.meter_read_fail);
// Buggy: 0 faults during fresh + 10 per stale event = 1200 faults/h.
TEST_ASSERT_EQUAL_UINT32(1200, faults_buggy.meter_read_fail);
}
void setup() {
UNITY_BEGIN();
RUN_TEST(test_no_fault_when_meter_fresh);
RUN_TEST(test_single_fault_after_blocking_stale);
RUN_TEST(test_buggy_produces_many_faults);
RUN_TEST(test_time_jump_single_fault);
RUN_TEST(test_sustained_stale_1hz_no_blocking);
RUN_TEST(test_periodic_blocking_reduces_faults);
RUN_TEST(test_realistic_scenario_mostly_fresh);
UNITY_END();
}
void loop() {}


@@ -1,279 +0,0 @@
#include <Arduino.h>
#include <unity.h>
#include "dd3_legacy_core.h"
#include "payload_codec.h"
static constexpr uint8_t kMaxSamples = 30;
static void fill_sparse_batch(BatchInput &in) {
memset(&in, 0, sizeof(in));
in.sender_id = 1;
in.batch_id = 42;
in.t_last = 1700000000;
in.present_mask = (1UL << 0) | (1UL << 2) | (1UL << 3) | (1UL << 10) | (1UL << 29);
in.n = 5;
in.battery_mV = 3750;
in.err_m = 2;
in.err_d = 1;
in.err_tx = 3;
in.err_last = 2;
in.err_rx_reject = 1;
in.energy_wh[0] = 100000;
in.energy_wh[1] = 100001;
in.energy_wh[2] = 100050;
in.energy_wh[3] = 100050;
in.energy_wh[4] = 100200;
in.p1_w[0] = -120;
in.p1_w[1] = -90;
in.p1_w[2] = 1910;
in.p1_w[3] = -90;
in.p1_w[4] = 500;
in.p2_w[0] = 50;
in.p2_w[1] = -1950;
in.p2_w[2] = 60;
in.p2_w[3] = 2060;
in.p2_w[4] = -10;
in.p3_w[0] = 0;
in.p3_w[1] = 10;
in.p3_w[2] = -1990;
in.p3_w[3] = 10;
in.p3_w[4] = 20;
}
static void fill_full_batch(BatchInput &in) {
memset(&in, 0, sizeof(in));
in.sender_id = 1;
in.batch_id = 0xBEEF;
in.t_last = 1769904999;
in.present_mask = 0x3FFFFFFFUL;
in.n = kMaxSamples;
in.battery_mV = 4095;
in.err_m = 10;
in.err_d = 20;
in.err_tx = 30;
in.err_last = 3;
in.err_rx_reject = 6;
for (uint8_t i = 0; i < kMaxSamples; ++i) {
in.energy_wh[i] = 500000UL + static_cast<uint32_t>(i) * static_cast<uint32_t>(i) * 3UL;
in.p1_w[i] = static_cast<int16_t>(-1000 + static_cast<int16_t>(i) * 25);
in.p2_w[i] = static_cast<int16_t>(500 - static_cast<int16_t>(i) * 30);
in.p3_w[i] = static_cast<int16_t>(((i % 2) == 0 ? 100 : -100) + static_cast<int16_t>(i) * 5);
}
}
static void assert_batch_equals(const BatchInput &expected, const BatchInput &actual) {
TEST_ASSERT_EQUAL_UINT16(expected.sender_id, actual.sender_id);
TEST_ASSERT_EQUAL_UINT16(expected.batch_id, actual.batch_id);
TEST_ASSERT_EQUAL_UINT32(expected.t_last, actual.t_last);
TEST_ASSERT_EQUAL_UINT32(expected.present_mask, actual.present_mask);
TEST_ASSERT_EQUAL_UINT8(expected.n, actual.n);
TEST_ASSERT_EQUAL_UINT16(expected.battery_mV, actual.battery_mV);
TEST_ASSERT_EQUAL_UINT8(expected.err_m, actual.err_m);
TEST_ASSERT_EQUAL_UINT8(expected.err_d, actual.err_d);
TEST_ASSERT_EQUAL_UINT8(expected.err_tx, actual.err_tx);
TEST_ASSERT_EQUAL_UINT8(expected.err_last, actual.err_last);
TEST_ASSERT_EQUAL_UINT8(expected.err_rx_reject, actual.err_rx_reject);
for (uint8_t i = 0; i < expected.n; ++i) {
TEST_ASSERT_EQUAL_UINT32(expected.energy_wh[i], actual.energy_wh[i]);
TEST_ASSERT_EQUAL_INT16(expected.p1_w[i], actual.p1_w[i]);
TEST_ASSERT_EQUAL_INT16(expected.p2_w[i], actual.p2_w[i]);
TEST_ASSERT_EQUAL_INT16(expected.p3_w[i], actual.p3_w[i]);
}
for (uint8_t i = expected.n; i < kMaxSamples; ++i) {
TEST_ASSERT_EQUAL_UINT32(0, actual.energy_wh[i]);
TEST_ASSERT_EQUAL_INT16(0, actual.p1_w[i]);
TEST_ASSERT_EQUAL_INT16(0, actual.p2_w[i]);
TEST_ASSERT_EQUAL_INT16(0, actual.p3_w[i]);
}
}
static void test_encode_decode_roundtrip_schema_v3() {
BatchInput in = {};
fill_sparse_batch(in);
uint8_t encoded[256] = {};
size_t encoded_len = 0;
TEST_ASSERT_TRUE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
TEST_ASSERT_TRUE(encoded_len > 24);
BatchInput out = {};
TEST_ASSERT_TRUE(decode_batch(encoded, encoded_len, &out));
assert_batch_equals(in, out);
}
static void test_decode_rejects_bad_magic_schema_flags() {
BatchInput in = {};
fill_sparse_batch(in);
uint8_t encoded[256] = {};
size_t encoded_len = 0;
TEST_ASSERT_TRUE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
BatchInput out = {};
uint8_t bad_magic[256] = {};
memcpy(bad_magic, encoded, encoded_len);
bad_magic[0] = 0x00;
TEST_ASSERT_FALSE(decode_batch(bad_magic, encoded_len, &out));
uint8_t bad_schema[256] = {};
memcpy(bad_schema, encoded, encoded_len);
bad_schema[2] = 0x02;
TEST_ASSERT_FALSE(decode_batch(bad_schema, encoded_len, &out));
uint8_t bad_flags[256] = {};
memcpy(bad_flags, encoded, encoded_len);
bad_flags[3] = 0x00;
TEST_ASSERT_FALSE(decode_batch(bad_flags, encoded_len, &out));
}
static void test_decode_rejects_truncated_and_length_mismatch() {
BatchInput in = {};
fill_sparse_batch(in);
uint8_t encoded[256] = {};
size_t encoded_len = 0;
TEST_ASSERT_TRUE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(encoded, encoded_len - 1, &out));
TEST_ASSERT_FALSE(decode_batch(encoded, 12, &out));
uint8_t with_tail[257] = {};
memcpy(with_tail, encoded, encoded_len);
with_tail[encoded_len] = 0xAA;
TEST_ASSERT_FALSE(decode_batch(with_tail, encoded_len + 1, &out));
}
static void test_encode_and_decode_reject_invalid_present_mask() {
BatchInput in = {};
fill_sparse_batch(in);
uint8_t encoded[256] = {};
size_t encoded_len = 0;
in.present_mask = 0x40000000UL;
TEST_ASSERT_FALSE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
fill_sparse_batch(in);
TEST_ASSERT_TRUE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
BatchInput out = {};
uint8_t invalid_bits[256] = {};
memcpy(invalid_bits, encoded, encoded_len);
invalid_bits[15] |= 0x40;
TEST_ASSERT_FALSE(decode_batch(invalid_bits, encoded_len, &out));
uint8_t bitcount_mismatch[256] = {};
memcpy(bitcount_mismatch, encoded, encoded_len);
bitcount_mismatch[16] = 0x01; // n=1 while mask has 5 bits set
TEST_ASSERT_FALSE(decode_batch(bitcount_mismatch, encoded_len, &out));
}
static void test_encode_rejects_invalid_n_and_regression_cases() {
BatchInput in = {};
fill_sparse_batch(in);
uint8_t encoded[256] = {};
size_t encoded_len = 0;
in.n = 31;
TEST_ASSERT_FALSE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
fill_sparse_batch(in);
in.n = 0;
in.present_mask = 1;
TEST_ASSERT_FALSE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
fill_sparse_batch(in);
in.n = 2;
in.present_mask = 0x00000003UL;
in.energy_wh[1] = in.energy_wh[0] - 1;
TEST_ASSERT_FALSE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
fill_sparse_batch(in);
TEST_ASSERT_FALSE(encode_batch(in, encoded, 10, &encoded_len));
}
static const uint8_t VECTOR_SYNC_EMPTY[] = {
0xB3, 0xDD, 0x03, 0x01, 0x01, 0x00, 0x34, 0x12, 0xE4, 0x97, 0x7E, 0x69, 0x00, 0x00, 0x00, 0x00, 0x00, 0xA6, 0x0E,
0x00, 0x00, 0x00, 0x00, 0x00};
static const uint8_t VECTOR_SPARSE_5[] = {
0xB3, 0xDD, 0x03, 0x01, 0x01, 0x00, 0x2A, 0x00, 0x00, 0xF1, 0x53, 0x65, 0x0D, 0x04, 0x00, 0x20, 0x05, 0xA6, 0x0E,
0x02, 0x01, 0x03, 0x02, 0x01, 0xA0, 0x86, 0x01, 0x00, 0x01, 0x31, 0x00, 0x96, 0x01, 0x88, 0xFF, 0x3C, 0xA0, 0x1F,
0x9F, 0x1F, 0x9C, 0x09, 0x32, 0x00, 0x9F, 0x1F, 0xB4, 0x1F, 0xA0, 0x1F, 0xAB, 0x20, 0x00, 0x00, 0x14, 0x9F, 0x1F,
0xA0, 0x1F, 0x14};
static const uint8_t VECTOR_FULL_30[] = {
0xB3, 0xDD, 0x03, 0x01, 0x01, 0x00, 0xEF, 0xBE, 0x67, 0x9B, 0x7E, 0x69, 0xFF, 0xFF, 0xFF, 0x3F, 0x1E, 0xFF, 0x0F,
0x0A, 0x14, 0x1E, 0x03, 0x06, 0x20, 0xA1, 0x07, 0x00, 0x03, 0x09, 0x0F, 0x15, 0x1B, 0x21, 0x27, 0x2D, 0x33, 0x39,
0x3F, 0x45, 0x4B, 0x51, 0x57, 0x5D, 0x63, 0x69, 0x6F, 0x75, 0x7B, 0x81, 0x01, 0x87, 0x01, 0x8D, 0x01, 0x93, 0x01,
0x99, 0x01, 0x9F, 0x01, 0xA5, 0x01, 0xAB, 0x01, 0x18, 0xFC, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32,
0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32, 0x32,
0x32, 0xF4, 0x01, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B,
0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x3B, 0x64, 0x00, 0x85, 0x03, 0x9A, 0x03,
0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A,
0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03,
0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03, 0x9A, 0x03, 0x85, 0x03};
static void test_payload_golden_vectors() {
BatchInput expected_sync = {};
expected_sync.sender_id = 1;
expected_sync.batch_id = 0x1234;
expected_sync.t_last = 1769904100;
expected_sync.present_mask = 0;
expected_sync.n = 0;
expected_sync.battery_mV = 3750;
expected_sync.err_m = 0;
expected_sync.err_d = 0;
expected_sync.err_tx = 0;
expected_sync.err_last = 0;
expected_sync.err_rx_reject = 0;
BatchInput expected_sparse = {};
fill_sparse_batch(expected_sparse);
BatchInput expected_full = {};
fill_full_batch(expected_full);
struct VectorCase {
const char *name;
const uint8_t *bytes;
size_t len;
const BatchInput *expected;
} cases[] = {
{"sync_empty", VECTOR_SYNC_EMPTY, sizeof(VECTOR_SYNC_EMPTY), &expected_sync},
{"sparse_5", VECTOR_SPARSE_5, sizeof(VECTOR_SPARSE_5), &expected_sparse},
{"full_30", VECTOR_FULL_30, sizeof(VECTOR_FULL_30), &expected_full},
};
for (size_t i = 0; i < (sizeof(cases) / sizeof(cases[0])); ++i) {
BatchInput decoded = {};
TEST_ASSERT_TRUE_MESSAGE(decode_batch(cases[i].bytes, cases[i].len, &decoded), cases[i].name);
assert_batch_equals(*cases[i].expected, decoded);
uint8_t reencoded[512] = {};
size_t reencoded_len = 0;
TEST_ASSERT_TRUE_MESSAGE(encode_batch(*cases[i].expected, reencoded, sizeof(reencoded), &reencoded_len), cases[i].name);
TEST_ASSERT_EQUAL_UINT_MESSAGE(cases[i].len, reencoded_len, cases[i].name);
TEST_ASSERT_EQUAL_UINT8_ARRAY_MESSAGE(cases[i].bytes, reencoded, cases[i].len, cases[i].name);
}
}
void setup() {
dd3_legacy_core_force_link();
UNITY_BEGIN();
RUN_TEST(test_encode_decode_roundtrip_schema_v3);
RUN_TEST(test_decode_rejects_bad_magic_schema_flags);
RUN_TEST(test_decode_rejects_truncated_and_length_mismatch);
RUN_TEST(test_encode_and_decode_reject_invalid_present_mask);
RUN_TEST(test_encode_rejects_invalid_n_and_regression_cases);
RUN_TEST(test_payload_golden_vectors);
UNITY_END();
}
void loop() {}


@@ -1,41 +0,0 @@
#include <Arduino.h>
#include <unity.h>
#include "app_context.h"
#include "receiver_pipeline.h"
#include "sender_state_machine.h"
#include "config.h"
static void test_refactor_headers_and_types() {
SenderStateMachineConfig sender_cfg = {};
sender_cfg.short_id = 0xF19C;
sender_cfg.device_id = "dd3-F19C";
ReceiverSharedState shared = {};
ReceiverPipelineConfig receiver_cfg = {};
receiver_cfg.short_id = 0xF19C;
receiver_cfg.device_id = "dd3-F19C";
receiver_cfg.shared = &shared;
SenderStateMachine sender_sm;
ReceiverPipeline receiver_pipe;
TEST_ASSERT_EQUAL_UINT16(0xF19C, sender_cfg.short_id);
TEST_ASSERT_NOT_NULL(receiver_cfg.shared);
(void)sender_sm;
(void)receiver_pipe;
}
static void test_ha_manufacturer_constant() {
TEST_ASSERT_EQUAL_STRING("AcidBurns", HA_MANUFACTURER);
}
void setup() {
UNITY_BEGIN();
RUN_TEST(test_refactor_headers_and_types);
RUN_TEST(test_ha_manufacturer_constant);
UNITY_END();
}
void loop() {}


@@ -1,406 +0,0 @@
#include <Arduino.h>
#include <unity.h>
#include "dd3_legacy_core.h"
#include "payload_codec.h"
#include "lora_frame_logic.h"
#include "batch_reassembly_logic.h"
// ===========================================================================
// Fuzz / negative tests for parser entry points (frame, ACK, payload codec,
// batch reassembly). Goal: every malformed input must be rejected without
// crash, OOB read/write, or undefined behaviour.
// ===========================================================================
// ---- decode_batch: negative / boundary tests ----
static void test_decode_batch_null_args() {
uint8_t dummy[32] = {};
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(nullptr, 24, &out));
TEST_ASSERT_FALSE(decode_batch(dummy, 24, nullptr));
TEST_ASSERT_FALSE(decode_batch(nullptr, 0, nullptr));
}
static void test_decode_batch_zero_length() {
uint8_t dummy[1] = {0};
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(dummy, 0, &out));
}
static void test_decode_batch_minimal_valid_sync() {
// Sync-only (n=0) payload: 24 bytes header, no samples.
uint8_t buf[24] = {};
// magic 0xDDB3 LE
buf[0] = 0xB3; buf[1] = 0xDD;
buf[2] = 3; // schema
buf[3] = 0x01; // flags
// sender_id=1
buf[4] = 0x01; buf[5] = 0x00;
// batch_id=1
buf[6] = 0x01; buf[7] = 0x00;
// t_last=1769904000 LE
uint32_t t = 1769904000UL;
buf[8] = t & 0xFF; buf[9] = (t >> 8) & 0xFF;
buf[10] = (t >> 16) & 0xFF; buf[11] = (t >> 24) & 0xFF;
// present_mask=0
buf[12] = 0; buf[13] = 0; buf[14] = 0; buf[15] = 0;
// n=0
buf[16] = 0;
// battery_mV=3750 LE
buf[17] = 0xA6; buf[18] = 0x0E;
// err fields
buf[19] = 0; buf[20] = 0; buf[21] = 0; buf[22] = 0; buf[23] = 0;
BatchInput out = {};
TEST_ASSERT_TRUE(decode_batch(buf, 24, &out));
TEST_ASSERT_EQUAL_UINT8(0, out.n);
TEST_ASSERT_EQUAL_UINT32(0, out.present_mask);
}
static void test_decode_batch_n_exceeds_30() {
// Forge a header with n=31, which should be rejected.
uint8_t buf[24] = {};
buf[0] = 0xB3; buf[1] = 0xDD;
buf[2] = 3; buf[3] = 0x01;
buf[4] = 0x01; buf[5] = 0x00;
buf[6] = 0x01; buf[7] = 0x00;
uint32_t t = 1769904000UL;
buf[8] = t & 0xFF; buf[9] = (t >> 8) & 0xFF;
buf[10] = (t >> 16) & 0xFF; buf[11] = (t >> 24) & 0xFF;
buf[12] = 0xFF; buf[13] = 0xFF; buf[14] = 0xFF; buf[15] = 0x3F; // all 30 bits set
buf[16] = 31; // n=31 → must reject
buf[17] = 0xA6; buf[18] = 0x0E;
buf[19] = 0; buf[20] = 0; buf[21] = 0; buf[22] = 0; buf[23] = 0;
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(buf, 24, &out));
}
static void test_decode_batch_present_mask_n_mismatch() {
// present_mask has 3 bits but n=5 → must reject.
uint8_t buf[24] = {};
buf[0] = 0xB3; buf[1] = 0xDD;
buf[2] = 3; buf[3] = 0x01;
buf[4] = 0x01; buf[5] = 0x00;
buf[6] = 0x01; buf[7] = 0x00;
uint32_t t = 1769904000UL;
buf[8] = t & 0xFF; buf[9] = (t >> 8) & 0xFF;
buf[10] = (t >> 16) & 0xFF; buf[11] = (t >> 24) & 0xFF;
buf[12] = 0x07; buf[13] = 0; buf[14] = 0; buf[15] = 0; // 3 bits
buf[16] = 5; // n=5 but only 3 mask bits
buf[17] = 0xA6; buf[18] = 0x0E;
buf[19] = 0; buf[20] = 0; buf[21] = 0; buf[22] = 0; buf[23] = 0;
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(buf, 24, &out));
}
static void test_decode_batch_reserved_mask_bits() {
// Bit 30 or 31 set → must reject (only bits 0-29 valid).
uint8_t buf[24] = {};
buf[0] = 0xB3; buf[1] = 0xDD;
buf[2] = 3; buf[3] = 0x01;
buf[4] = 0x01; buf[5] = 0x00;
buf[6] = 0x01; buf[7] = 0x00;
uint32_t t = 1769904000UL;
buf[8] = t & 0xFF; buf[9] = (t >> 8) & 0xFF;
buf[10] = (t >> 16) & 0xFF; buf[11] = (t >> 24) & 0xFF;
buf[12] = 0x01; buf[13] = 0; buf[14] = 0; buf[15] = 0x40; // bit 30
buf[16] = 1;
buf[17] = 0xA6; buf[18] = 0x0E;
buf[19] = 0; buf[20] = 0; buf[21] = 0; buf[22] = 0; buf[23] = 0;
BatchInput out = {};
TEST_ASSERT_FALSE(decode_batch(buf, 24, &out));
}
// ---- uleb128_decode: negative tests ----
static void test_uleb128_decode_unterminated() {
// 5 continuation bytes without termination → reject.
uint8_t data[] = {0x80, 0x80, 0x80, 0x80, 0x80};
size_t pos = 0;
uint32_t val = 0;
TEST_ASSERT_FALSE(uleb128_decode(data, sizeof(data), &pos, &val));
}
static void test_uleb128_decode_overflow() {
// 5th byte has bits in upper nibble → overflow.
uint8_t data[] = {0x80, 0x80, 0x80, 0x80, 0x10};
size_t pos = 0;
uint32_t val = 0;
TEST_ASSERT_FALSE(uleb128_decode(data, sizeof(data), &pos, &val));
}
static void test_uleb128_decode_null_args() {
size_t pos = 0;
uint32_t val = 0;
uint8_t data[] = {0x00};
TEST_ASSERT_FALSE(uleb128_decode(nullptr, 1, &pos, &val));
TEST_ASSERT_FALSE(uleb128_decode(data, 1, nullptr, &val));
TEST_ASSERT_FALSE(uleb128_decode(data, 1, &pos, nullptr));
}
static void test_uleb128_decode_empty_buffer() {
size_t pos = 0;
uint32_t val = 0;
uint8_t data[1] = {};
TEST_ASSERT_FALSE(uleb128_decode(data, 0, &pos, &val));
}
// ---- svarint_decode: negative tests ----
static void test_svarint_decode_overflow() {
// The underlying uleb128 overflows
uint8_t data[] = {0x80, 0x80, 0x80, 0x80, 0x10};
size_t pos = 0;
int32_t val = 0;
TEST_ASSERT_FALSE(svarint_decode(data, sizeof(data), &pos, &val));
}
// ---- lora_parse_frame: fuzz seeds ----
static void test_frame_parse_all_zeros() {
uint8_t buf[5] = {0, 0, 0, 0, 0};
uint8_t kind = 0xFF;
uint16_t dev = 0xFFFF;
uint8_t payload[16] = {};
size_t plen = 0;
// All-zero frame: CRC of first 3 bytes won't match last 2 → CrcFail.
LoraFrameDecodeStatus s = lora_parse_frame(buf, sizeof(buf), 1, &kind, &dev, payload, sizeof(payload), &plen);
TEST_ASSERT_TRUE(s == LoraFrameDecodeStatus::CrcFail || s == LoraFrameDecodeStatus::Ok);
}
static void test_frame_parse_max_msg_kind_reject() {
// Build valid frame with msg_kind=2, then parse with max_msg_kind=1.
uint8_t payload[] = {0x42};
uint8_t frame[32] = {};
size_t flen = 0;
TEST_ASSERT_TRUE(lora_build_frame(2, 0xABCD, payload, 1, frame, sizeof(frame), flen));
uint8_t kind = 0;
uint16_t dev = 0;
uint8_t out[8] = {};
size_t olen = 0;
LoraFrameDecodeStatus s = lora_parse_frame(frame, flen, 1, &kind, &dev, out, sizeof(out), &olen);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::InvalidMsgKind), static_cast<uint8_t>(s));
}
static void test_frame_parse_payload_too_large_for_output() {
// Build valid frame with 4 bytes payload, parse into 2-byte output → LengthMismatch.
uint8_t payload[] = {1, 2, 3, 4};
uint8_t frame[32] = {};
size_t flen = 0;
TEST_ASSERT_TRUE(lora_build_frame(0, 0x1234, payload, 4, frame, sizeof(frame), flen));
uint8_t kind = 0;
uint16_t dev = 0;
uint8_t out[2] = {};
size_t olen = 0;
LoraFrameDecodeStatus s = lora_parse_frame(frame, flen, 1, &kind, &dev, out, sizeof(out), &olen);
TEST_ASSERT_EQUAL_UINT8(static_cast<uint8_t>(LoraFrameDecodeStatus::LengthMismatch), static_cast<uint8_t>(s));
}
static void test_frame_build_null_args() {
uint8_t buf[32] = {};
size_t len = 0;
TEST_ASSERT_FALSE(lora_build_frame(0, 0, nullptr, 5, buf, sizeof(buf), len));
TEST_ASSERT_FALSE(lora_build_frame(0, 0, buf, 0, nullptr, sizeof(buf), len));
}
// ---- batch_reassembly: negative / abuse tests ----
static void test_reassembly_null_buffer() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t chunk[] = {1, 2, 3};
  uint16_t clen = 0;
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 1, 0, 1, 3, chunk, 3, 100, 5000, 64, nullptr, 0, clen)));
}

static void test_reassembly_null_chunk_data() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[32] = {};
  uint16_t clen = 0;
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 1, 0, 1, 3, nullptr, 3, 100, 5000, 64, buffer, sizeof(buffer), clen)));
}

static void test_reassembly_total_len_zero_with_data() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[32] = {};
  uint8_t chunk[] = {1};
  uint16_t clen = 0;
  // total_len=0 but chunk_len>0 → must reject.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 1, 0, 1, 0, chunk, 1, 100, 5000, 64, buffer, sizeof(buffer), clen)));
}

static void test_reassembly_total_len_exceeds_max() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[32] = {};
  uint8_t chunk[] = {1};
  uint16_t clen = 0;
  // total_len=5000 > max_total_len=64 → must reject.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 1, 0, 1, 5000, chunk, 1, 100, 5000, 64, buffer, sizeof(buffer), clen)));
}

static void test_reassembly_timeout_resets() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[32] = {};
  uint8_t chunk1[] = {1, 2};
  uint8_t chunk2[] = {3};
  uint16_t clen = 0;
  // First chunk at t=1000.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
      static_cast<uint8_t>(batch_reassembly_push(state, 10, 0, 2, 3, chunk1, 2, 1000, 500, 32, buffer, sizeof(buffer), clen)));
  // Second chunk at t=2000 (>500ms after last) → timeout → ErrorReset.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 10, 1, 2, 3, chunk2, 1, 2000, 500, 32, buffer, sizeof(buffer), clen)));
}

static void test_reassembly_different_batch_id_resets() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[32] = {};
  uint8_t chunk[] = {1, 2};
  uint16_t clen = 0;
  // Start batch 10.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::InProgress),
      static_cast<uint8_t>(batch_reassembly_push(state, 10, 0, 2, 3, chunk, 2, 100, 5000, 32, buffer, sizeof(buffer), clen)));
  // Receive chunk for batch 11 (different), but index=1 → ErrorReset (non-zero index for new batch).
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 11, 1, 2, 3, chunk, 1, 200, 5000, 32, buffer, sizeof(buffer), clen)));
}

static void test_reassembly_overflow_buffer() {
  BatchReassemblyState state = {};
  batch_reassembly_reset(state);
  uint8_t buffer[4] = {};
  uint8_t chunk[] = {1, 2, 3, 4, 5};
  uint16_t clen = 0;
  // total_len=5 but buffer_cap=4 → chunk overflows buffer → ErrorReset.
  TEST_ASSERT_EQUAL_UINT8(
      static_cast<uint8_t>(BatchReassemblyStatus::ErrorReset),
      static_cast<uint8_t>(batch_reassembly_push(state, 1, 0, 1, 5, chunk, 5, 100, 5000, 64, buffer, sizeof(buffer), clen)));
}
// ---- Byte-flip fuzz of a valid encoded payload ----
static void test_decode_batch_byte_flip_fuzz() {
  // Encode a valid batch, then flip each byte and ensure decode either
  // returns false or produces a valid output (no crash, no UB).
  BatchInput in = {};
  in.sender_id = 1;
  in.batch_id = 42;
  in.t_last = 1769904000UL;
  in.present_mask = 0x07;  // bits 0-2
  in.n = 3;
  in.battery_mV = 3750;
  in.energy_wh[0] = 100000;
  in.energy_wh[1] = 100010;
  in.energy_wh[2] = 100020;
  in.p1_w[0] = 100; in.p1_w[1] = 110; in.p1_w[2] = 120;
  in.p2_w[0] = 200; in.p2_w[1] = 210; in.p2_w[2] = 220;
  in.p3_w[0] = 300; in.p3_w[1] = 310; in.p3_w[2] = 320;
  uint8_t encoded[256] = {};
  size_t encoded_len = 0;
  TEST_ASSERT_TRUE(encode_batch(in, encoded, sizeof(encoded), &encoded_len));
  TEST_ASSERT_TRUE(encoded_len > 0);
  for (size_t i = 0; i < encoded_len; ++i) {
    uint8_t mutated[256];
    memcpy(mutated, encoded, encoded_len);
    mutated[i] ^= 0xFF;  // flip all bits of byte i
    BatchInput out = {};
    // Must not crash. Return value may be true (if flip is benign) or false.
    (void)decode_batch(mutated, encoded_len, &out);
  }
  // Verify original still decodes correctly.
  BatchInput verify = {};
  TEST_ASSERT_TRUE(decode_batch(encoded, encoded_len, &verify));
  TEST_ASSERT_EQUAL_UINT8(in.n, verify.n);
}
// ---- lora_parse_frame byte-flip ----
static void test_frame_byte_flip_fuzz() {
  uint8_t payload[] = {0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07};
  uint8_t frame[32] = {};
  size_t frame_len = 0;
  TEST_ASSERT_TRUE(lora_build_frame(1, 0xF19C, payload, sizeof(payload), frame, sizeof(frame), frame_len));
  for (size_t i = 0; i < frame_len; ++i) {
    uint8_t mutated[32];
    memcpy(mutated, frame, frame_len);
    mutated[i] ^= 0xFF;
    uint8_t kind = 0;
    uint16_t dev = 0;
    uint8_t out[16] = {};
    size_t olen = 0;
    // Must not crash.
    (void)lora_parse_frame(mutated, frame_len, 1, &kind, &dev, out, sizeof(out), &olen);
  }
}
void setup() {
  dd3_legacy_core_force_link();
  UNITY_BEGIN();

  // decode_batch negative tests
  RUN_TEST(test_decode_batch_null_args);
  RUN_TEST(test_decode_batch_zero_length);
  RUN_TEST(test_decode_batch_minimal_valid_sync);
  RUN_TEST(test_decode_batch_n_exceeds_30);
  RUN_TEST(test_decode_batch_present_mask_n_mismatch);
  RUN_TEST(test_decode_batch_reserved_mask_bits);

  // uleb128 / svarint negative tests
  RUN_TEST(test_uleb128_decode_unterminated);
  RUN_TEST(test_uleb128_decode_overflow);
  RUN_TEST(test_uleb128_decode_null_args);
  RUN_TEST(test_uleb128_decode_empty_buffer);
  RUN_TEST(test_svarint_decode_overflow);

  // lora_parse_frame negative tests
  RUN_TEST(test_frame_parse_all_zeros);
  RUN_TEST(test_frame_parse_max_msg_kind_reject);
  RUN_TEST(test_frame_parse_payload_too_large_for_output);
  RUN_TEST(test_frame_build_null_args);

  // batch_reassembly negative tests
  RUN_TEST(test_reassembly_null_buffer);
  RUN_TEST(test_reassembly_null_chunk_data);
  RUN_TEST(test_reassembly_total_len_zero_with_data);
  RUN_TEST(test_reassembly_total_len_exceeds_max);
  RUN_TEST(test_reassembly_timeout_resets);
  RUN_TEST(test_reassembly_different_batch_id_resets);
  RUN_TEST(test_reassembly_overflow_buffer);

  // Byte-flip fuzz tests
  RUN_TEST(test_decode_batch_byte_flip_fuzz);
  RUN_TEST(test_frame_byte_flip_fuzz);

  UNITY_END();
}
void loop() {}

#!/usr/bin/env python3
"""
Compatibility test for republish_mqtt.py and republish_mqtt_gui.py
Tests against newest CSV and InfluxDB formats
"""
import csv
import json
import tempfile
import sys
from pathlib import Path
from datetime import datetime
def test_csv_format_current():
    """Test that scripts can parse the CURRENT SD logger CSV format (ts_hms_local)"""
    print("\n=== TEST 1: CSV Format (Current SD logger) ===")
    # Current format from sd_logger.cpp line 105:
    # ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last
    csv_header = "ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last"
    csv_data = "1710076800,08:00:00,5432,1800,1816,1816,1234.567,4.15,95,-95,9.25,0,0,0,"
    with tempfile.NamedTemporaryFile(mode='w', suffix='.csv', delete=False, newline='') as f:
        f.write(csv_header + '\n')
        f.write(csv_data + '\n')
        csv_file = f.name
    try:
        # Parse like the republish script does
        with open(csv_file, 'r') as f:
            reader = csv.DictReader(f)
            fieldnames = reader.fieldnames
            # Check required fields
            required = ['ts_utc', 'e_kwh', 'p_w']
            missing = [field for field in required if field not in fieldnames]
            if missing:
                print(f"❌ FAIL: Missing required fields: {missing}")
                return False
            # Check optional fields that scripts handle
            optional_handled = ['p1_w', 'p2_w', 'p3_w', 'bat_v', 'bat_pct', 'rssi', 'snr']
            present_optional = [f for f in optional_handled if f in fieldnames]
            print(f"✓ Required fields: {required}")
            print(f"✓ Optional fields found: {present_optional}")
            # Try parsing first row
            for row in reader:
                try:
                    ts_utc = int(row['ts_utc'])
                    e_kwh = float(row['e_kwh'])
                    p_w = int(round(float(row['p_w'])))
                    print(f"✓ Parsed sample: ts={ts_utc}, e_kwh={e_kwh:.2f}, p_w={p_w}W")
                    return True
                except (ValueError, KeyError) as e:
                    print(f"❌ FAIL: Could not parse row: {e}")
                    return False
    finally:
        Path(csv_file).unlink()
def test_csv_format_with_new_fields():
    """Test that scripts gracefully handle new CSV fields (rx_reject, etc)"""
    print("\n=== TEST 2: CSV Format with Future Fields ===")
    # Hypothetical future format with additional fields
    csv_header = "ts_utc,ts_hms_local,p_w,p1_w,p2_w,p3_w,e_kwh,bat_v,bat_pct,rssi,snr,err_m,err_d,err_tx,err_last,rx_reject,rx_reject_text"
    csv_data = "1710076800,08:00:00,5432,1800,1816,1816,1234.567,4.15,95,-95,9.25,0,0,0,,0,none"
    with tempfile.NamedTemporaryFile(mode='w', suffix='.csv', delete=False, newline='') as f:
        f.write(csv_header + '\n')
        f.write(csv_data + '\n')
        csv_file = f.name
    try:
        with open(csv_file, 'r') as f:
            reader = csv.DictReader(f)
            fieldnames = reader.fieldnames
            # Check required fields
            required = ['ts_utc', 'e_kwh', 'p_w']
            missing = [field for field in required if field not in fieldnames]
            if missing:
                print(f"❌ FAIL: Missing required fields: {missing}")
                return False
            print(f"✓ All required fields present: {required}")
            print(f"✓ Total fields in format: {len(fieldnames)}")
            print(f" - New field 'rx_reject': {'rx_reject' in fieldnames}")
            print(f" - New field 'rx_reject_text': {'rx_reject_text' in fieldnames}")
            return True
    finally:
        Path(csv_file).unlink()
def test_mqtt_json_format():
    """Test that republished MQTT JSON format matches device format"""
    print("\n=== TEST 3: MQTT JSON Format ===")
    # Simulate what the republish script generates
    csv_row = {
        'ts_utc': '1710076800',
        'e_kwh': '1234.567',
        'p_w': '5432.1',
        'p1_w': '1800.5',
        'p2_w': '1816.3',
        'p3_w': '1815.7',
        'bat_v': '4.15',
        'bat_pct': '95',
        'rssi': '-95',
        'snr': '9.25'
    }
    # Republish script builds this
    data = {
        'id': 'F19C',  # Last 4 chars of device_id
        'ts': int(csv_row['ts_utc']),
    }
    # Energy
    e_kwh = float(csv_row['e_kwh'])
    data['e_kwh'] = f"{e_kwh:.2f}"
    # Power values (as integers)
    for key in ['p_w', 'p1_w', 'p2_w', 'p3_w']:
        if key in csv_row and csv_row[key].strip():
            data[key] = int(round(float(csv_row[key])))
    # Battery
    if 'bat_v' in csv_row and csv_row['bat_v'].strip():
        data['bat_v'] = f"{float(csv_row['bat_v']):.2f}"
    if 'bat_pct' in csv_row and csv_row['bat_pct'].strip():
        data['bat_pct'] = int(csv_row['bat_pct'])
    # Link quality
    if 'rssi' in csv_row and csv_row['rssi'].strip() and csv_row['rssi'] != '-127':
        data['rssi'] = int(csv_row['rssi'])
    if 'snr' in csv_row and csv_row['snr'].strip():
        data['snr'] = float(csv_row['snr'])
    # What the device format expects (from json_codec.cpp)
    expected_fields = {'id', 'ts', 'e_kwh', 'p_w', 'p1_w', 'p2_w', 'p3_w', 'bat_v', 'bat_pct', 'rssi', 'snr'}
    actual_fields = set(data.keys())
    print("✓ Republish script generates:")
    print(f" JSON: {json.dumps(data, indent=2)}")
    print("✓ Field types:")
    for field, value in data.items():
        print(f" - {field}: {type(value).__name__} = {repr(value)}")
    if expected_fields == actual_fields:
        print("✓ All expected fields present")
        return True
    else:
        missing = expected_fields - actual_fields
        extra = actual_fields - expected_fields
        if missing:
            print(f"⚠ Missing fields: {missing}")
        if extra:
            print(f"⚠ Extra fields: {extra}")
        return True  # Still OK if extra/missing as device accepts optional fields
def test_csv_legacy_format():
    """Test backward compatibility with legacy CSV format (no ts_hms_local)"""
    print("\n=== TEST 4: CSV Format (Legacy - no ts_hms_local) ===")
    # Legacy format: just ts_utc,p_w,... (from README: History parser accepts both)
    csv_header = "ts_utc,p_w,e_kwh,p1_w,p2_w,p3_w,bat_v,bat_pct,rssi,snr"
    csv_data = "1710076800,5432,1234.567,1800,1816,1816,4.15,95,-95,9.25"
    with tempfile.NamedTemporaryFile(mode='w', suffix='.csv', delete=False, newline='') as f:
        f.write(csv_header + '\n')
        f.write(csv_data + '\n')
        csv_file = f.name
    try:
        with open(csv_file, 'r') as f:
            reader = csv.DictReader(f)
            required = ['ts_utc', 'e_kwh', 'p_w']
            missing = [field for field in required if field not in reader.fieldnames]
            if missing:
                print(f"❌ FAIL: Missing required fields: {missing}")
                return False
            print("✓ Legacy format compatible (ts_hms_local not required)")
            return True
    finally:
        Path(csv_file).unlink()
def test_influxdb_query_schema():
    """Document expected InfluxDB schema for auto-detect"""
    print("\n=== TEST 5: InfluxDB Schema (Query Format) ===")
    print("""
The republish scripts expect:
- Measurement: "smartmeter"
- Tag name: "device_id"
- Query example:
    from(bucket: "smartmeter")
      |> range(start: <timestamp>, stop: <timestamp>)
      |> filter(fn: (r) => r._measurement == "smartmeter" and r.device_id == "dd3-F19C")
      |> keep(columns: ["_time"])
      |> sort(columns: ["_time"])
""")
    print("✓ Expected schema documented")
    print("⚠ NOTE: Device firmware does NOT write to InfluxDB directly")
    print(" → Requires separate bridge (Telegraf, Node-RED, etc) from MQTT → InfluxDB")
    print(" → InfluxDB auto-detect mode is OPTIONAL - manual mode always works")
    return True
def print_summary(results):
    """Print test summary"""
    print("\n" + "="*60)
    print("TEST SUMMARY")
    print("="*60)
    passed = sum(1 for r in results if r)
    total = len(results)
    test_names = [
        "CSV Format (Current with ts_hms_local)",
        "CSV Format (with future fields)",
        "MQTT JSON Format compatibility",
        "CSV Format (Legacy - backward compat)",
        "InfluxDB schema validation",
    ]
    for name, result in zip(test_names, results):
        status = "✓ PASS" if result else "❌ FAIL"
        print(f"{status}: {name}")
    print(f"\nResult: {passed}/{total} tests passed")
    return passed == total
if __name__ == '__main__':
    print("="*60)
    print("DD3 MQTT Republisher - Compatibility Tests")
    print("Testing against newest CSV and InfluxDB formats")
    print(f"Date: {datetime.now()}")
    print("="*60)
    results = [
        test_csv_format_current(),
        test_csv_format_with_new_fields(),
        test_mqtt_json_format(),
        test_csv_legacy_format(),
        test_influxdb_query_schema(),
    ]
    all_passed = print_summary(results)
    sys.exit(0 if all_passed else 1)
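For quick manual checks, the CSV-row → MQTT-JSON mapping that `test_mqtt_json_format` exercises inline can be sketched as a standalone helper. The function name `row_to_mqtt` and the reduced field subset are illustrative only; the real republish scripts build the dict inline with the full field set.

```python
import json

def row_to_mqtt(row, device_suffix='F19C'):
    # Hypothetical helper mirroring the mapping in test_mqtt_json_format;
    # field names come from the tests, the function itself is a sketch.
    data = {'id': device_suffix, 'ts': int(row['ts_utc'])}
    # Energy is republished as a string with two decimals.
    data['e_kwh'] = f"{float(row['e_kwh']):.2f}"
    # Power values are rounded to integer watts when present.
    for key in ('p_w', 'p1_w', 'p2_w', 'p3_w'):
        if row.get(key, '').strip():
            data[key] = int(round(float(row[key])))
    # Battery voltage keeps two decimals as a string, like the device JSON.
    if row.get('bat_v', '').strip():
        data['bat_v'] = f"{float(row['bat_v']):.2f}"
    return data

if __name__ == '__main__':
    sample = {'ts_utc': '1710076800', 'e_kwh': '1234.567',
              'p_w': '5432.1', 'bat_v': '4.15'}
    print(json.dumps(row_to_mqtt(sample)))
```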