# Fix Beego ORM BinaryField Type Issues

You're building a Go application with Beego ORM that stores files in the database. After inserting binary data (images, PDFs, encrypted content), you retrieve it and find it's corrupted or empty.

```go
// Insert a 5 KB image
doc.Content = imageData // []byte, len = 5120
o.Insert(doc)

// Retrieve it
o.Read(&doc)
fmt.Println(len(doc.Content)) // Output: 0 - data is gone!
```

Or you get this error when running migrations:

```
panic: field type not supported: binary
```

## Real Scenario: Image Storage Corruption

A document management system stored PDF thumbnails in MySQL using Beego ORM. Users reported that some thumbnails appeared corrupted after retrieval - colors were wrong, images were partially visible, or completely blank.

The root cause: The model used string instead of []byte for the binary field:

```go
// Wrong - causes data corruption
type Document struct {
    Id        int64
    Name      string
    Thumbnail string `orm:"type(blob)"` // Wrong type!
}
```

When binary data was stored in a `string` field, the driver transmitted it as a text value, and character-set handling (e.g. utf8mb4 validation and conversion) mangled byte sequences that are not valid UTF-8. Go strings themselves can hold arbitrary bytes; the corruption happens at the database layer, not inside Go.

The fix:

```go
// Correct - preserves binary data
type Document struct {
    Id        int64
    Name      string
    Thumbnail []byte `orm:"type(blob)"` // Correct type
}
```

## Understanding BinaryField in Beego ORM

Beego ORM supports binary data types through the type() tag:

| Beego Tag | MySQL Type | PostgreSQL Type | Max Size (MySQL) |
|-----------|------------|-----------------|------------------|
| `type(binary)` | BLOB | BYTEA | 65,535 bytes |
| `type(blob)` | BLOB | BYTEA | 65,535 bytes |
| `type(mediumblob)` | MEDIUMBLOB | BYTEA | 16 MB |
| `type(longblob)` | LONGBLOB | BYTEA | 4 GB |
| `type(tinyblob)` | TINYBLOB | BYTEA | 255 bytes |

Note that PostgreSQL has a single `BYTEA` type (limited to roughly 1 GB), so the size tiers above apply to MySQL.

Important: The Go field type MUST be []byte, not string.

## Correct Model Definition

```go
package models

import (
    "time"

    "github.com/beego/beego/v2/client/orm"
)

type Document struct {
    Id          int64     `orm:"pk;auto"`
    Name        string    `orm:"size(255)"`
    ContentType string    `orm:"size(100)"`
    Size        int64     // File size in bytes
    Content     []byte    `orm:"type(longblob)"` // Binary content
    Thumbnail   []byte    `orm:"type(blob)"`     // Small thumbnail
    Hash        string    `orm:"size(64)"`       // SHA-256 hash for integrity
    CreatedAt   time.Time `orm:"auto_now_add;type(datetime)"`
    UpdatedAt   time.Time `orm:"auto_now;type(datetime)"`
}

func init() {
    orm.RegisterModel(new(Document))
}

// TableName returns the table name
func (d *Document) TableName() string {
    return "documents"
}
```

## Complete Working Example

```go
package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
    "log"
    "os"

    "github.com/beego/beego/v2/client/orm"
    _ "github.com/go-sql-driver/mysql"
)

type Document struct {
    Id          int64  `orm:"pk;auto"`
    Name        string `orm:"size(255)"`
    ContentType string `orm:"size(100)"`
    Size        int64
    Content     []byte `orm:"type(longblob)"`
    Hash        string `orm:"size(64)"`
}

func init() {
    // Register database
    orm.RegisterDataBase("default", "mysql",
        "user:password@tcp(127.0.0.1:3306)/testdb?charset=utf8mb4&parseTime=true")
    orm.RegisterModel(new(Document))
}

func main() {
    // Create tables
    err := orm.RunSyncdb("default", true, true)
    if err != nil {
        log.Fatalf("Failed to sync database: %v", err)
    }

    o := orm.NewOrm()

    // Read a file and store it
    content, err := os.ReadFile("example.pdf")
    if err != nil {
        log.Fatalf("Failed to read file: %v", err)
    }

    // Calculate hash for integrity check
    hash := sha256.Sum256(content)
    hashStr := hex.EncodeToString(hash[:])

    // Create document
    doc := &Document{
        Name:        "example.pdf",
        ContentType: "application/pdf",
        Size:        int64(len(content)),
        Content:     content,
        Hash:        hashStr,
    }

    // Insert
    id, err := o.Insert(doc)
    if err != nil {
        log.Fatalf("Failed to insert: %v", err)
    }
    fmt.Printf("Inserted document with ID %d, size %d bytes\n", id, doc.Size)

    // Retrieve and verify
    retrieved := &Document{Id: id}
    err = o.Read(retrieved)
    if err != nil {
        log.Fatalf("Failed to read: %v", err)
    }

    // Verify integrity
    retrievedHash := sha256.Sum256(retrieved.Content)
    retrievedHashStr := hex.EncodeToString(retrievedHash[:])

    if retrievedHashStr == retrieved.Hash {
        fmt.Printf("✓ Integrity verified: %d bytes retrieved correctly\n", len(retrieved.Content))
    } else {
        fmt.Printf("✗ Integrity check failed!\n")
    }
}
```

## Common Errors and Solutions

### Error 1: Data Not Persisting

Symptom: After insert, the binary field is empty or nil.

Cause: Using string instead of []byte.

```go
// Wrong
type File struct {
    Data string `orm:"type(blob)"`
}

// Correct
type File struct {
    Data []byte `orm:"type(blob)"`
}
```

### Error 2: Data Corruption

Symptom: Retrieved binary data differs from the original; null bytes or high-bit bytes are missing or altered.

Cause: String encoding/decoding issues.

Test for corruption:

```go
func testBinaryIntegrity(original []byte, retrieved []byte) bool {
    if len(original) != len(retrieved) {
        return false
    }
    for i := range original {
        if original[i] != retrieved[i] {
            return false
        }
    }
    return true
}
```

### Error 3: Query with Binary Field

Symptom: Cannot query by binary field value.

Cause: Most databases don't support direct equality comparison on BLOB columns.

```go
// This won't work reliably
var docs []Document
o.QueryTable("document").Filter("content", someBinaryData).All(&docs)
```

Solution: Use a hash column for lookups:

```go
type Document struct {
    Id          int64
    Content     []byte `orm:"type(blob)"`
    ContentHash string `orm:"size(64);index"` // SHA-256 hash
}

func (d *Document) SetContent(content []byte) {
    d.Content = content
    hash := sha256.Sum256(content)
    d.ContentHash = hex.EncodeToString(hash[:])
}

// Query by hash
func FindByContentHash(o orm.Ormer, hash string) (*Document, error) {
    doc := &Document{ContentHash: hash}
    err := o.Read(doc, "ContentHash")
    if err != nil {
        return nil, err
    }
    return doc, nil
}
```

### Error 4: Memory Issues with Large Files

Symptom: Out of memory when loading large files.

Cause: Loading entire file into memory.

Solution: Store file path instead of content:

```go
type FileRecord struct {
    Id          int64  `orm:"pk;auto"`
    Name        string `orm:"size(255)"`
    FilePath    string `orm:"size(512)"`
    Size        int64
    ContentType string `orm:"size(100)"`
    Hash        string `orm:"size(64)"`
}

func SaveFile(o orm.Ormer, name string, content []byte) (*FileRecord, error) {
    // Generate a unique filename from the content hash
    hash := sha256.Sum256(content)
    hashStr := hex.EncodeToString(hash[:])
    filePath := fmt.Sprintf("/uploads/%s_%s", hashStr[:8], name)

    // Write to disk
    err := os.WriteFile(filePath, content, 0644)
    if err != nil {
        return nil, err
    }

    // Save record to database
    record := &FileRecord{
        Name:     name,
        FilePath: filePath,
        Size:     int64(len(content)),
        Hash:     hashStr,
    }
    _, err = o.Insert(record)
    return record, err
}

func LoadFile(o orm.Ormer, id int64) ([]byte, error) {
    record := &FileRecord{Id: id}
    err := o.Read(record)
    if err != nil {
        return nil, err
    }
    return os.ReadFile(record.FilePath)
}
```

### Error 5: PostgreSQL BYTEA Escaping

Symptom: Different behavior between MySQL and PostgreSQL.

Cause: PostgreSQL uses different binary encoding.

```go
// For PostgreSQL, use the BYTEA type
type Document struct {
    Id      int64
    Content []byte `orm:"type(bytea)"` // PostgreSQL specific
}

// Connection string for PostgreSQL
orm.RegisterDataBase("default", "postgres",
    "user=postgres password=pass dbname=test sslmode=disable")
```

## Performance Best Practices

### 1. Choose the Right Blob Size

```go
// For small files (< 64 KB)
Content []byte `orm:"type(blob)"`

// For medium files (< 16 MB)
Content []byte `orm:"type(mediumblob)"`

// For large files (< 4 GB)
Content []byte `orm:"type(longblob)"`
```

### 2. Avoid Loading Binary Data Unnecessarily

```go
// Wrong - loads all binary data
var docs []Document
o.QueryTable("document").All(&docs)

// Correct - only load metadata
o.QueryTable("document").All(&docs, "Id", "Name", "Size", "ContentType", "CreatedAt")

// Load binary data only when needed
func GetContent(o orm.Ormer, id int64) ([]byte, error) {
    doc := &Document{Id: id}
    err := o.Read(doc)
    if err != nil {
        return nil, err
    }
    return doc.Content, nil
}
```

### 3. Use Streaming for Large Files

```go
// StreamToFile copies a document's content to the given writer.
// Note: o.Read still loads the entire blob into memory first; for
// truly large files, use chunked raw SQL reads or store files on disk.
func StreamToFile(o orm.Ormer, id int64, writer io.Writer) error {
    doc := &Document{Id: id}
    err := o.Read(doc)
    if err != nil {
        return err
    }
    _, err = writer.Write(doc.Content)
    return err
}
```
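When the blob is too large to read in one go, one workaround (a sketch, not a Beego API; it assumes a MySQL `documents` table) is to page through the column with raw SQL such as `SELECT SUBSTRING(content, ?, ?) FROM documents WHERE id = ?`. The chunk arithmetic can be isolated in a small, testable helper:

```go
package main

import "fmt"

// chunkRanges returns 1-based (offset, length) pairs suitable for
// MySQL's SUBSTRING(content, offset, length), so a large blob can be
// fetched in pieces rather than in one huge read.
func chunkRanges(total, chunkSize int64) [][2]int64 {
    if total <= 0 || chunkSize <= 0 {
        return nil
    }
    var ranges [][2]int64
    for off := int64(1); off <= total; off += chunkSize {
        n := chunkSize
        if remaining := total - off + 1; remaining < n {
            n = remaining
        }
        ranges = append(ranges, [2]int64{off, n})
    }
    return ranges
}

func main() {
    // A 10-byte blob read in 4-byte chunks
    fmt.Println(chunkRanges(10, 4)) // [[1 4] [5 4] [9 2]]
}
```

Each pair would be passed to `o.Raw(...)` in a loop, writing every chunk to the destination writer before fetching the next.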

### 4. Add Indexes on Metadata

```go
type Document struct {
    Id          int64  `orm:"pk;auto"`
    Name        string `orm:"size(255);index"` // Index for name searches
    Hash        string `orm:"size(64);unique"` // Unique index for deduplication
    ContentType string `orm:"size(100);index"` // Index for filtering by type
    Content     []byte `orm:"type(blob)"`
}
```

## Migration from Other ORMs

### From GORM

```go
// GORM
type File struct {
    gorm.Model
    Data []byte `gorm:"type:blob"`
}

// Beego equivalent
type File struct {
    Id        int64     `orm:"pk;auto"`
    CreatedAt time.Time `orm:"auto_now_add"`
    UpdatedAt time.Time `orm:"auto_now"`
    Data      []byte    `orm:"type(blob)"`
}
```

### From XORM

```go
// XORM
type File struct {
    Id   int64  `xorm:"pk autoincr"`
    Data []byte `xorm:"blob"`
}

// Beego equivalent
type File struct {
    Id   int64  `orm:"pk;auto"`
    Data []byte `orm:"type(blob)"`
}
```

## Testing Binary Data Integrity

```go
package main

import (
    "testing"

    "github.com/beego/beego/v2/client/orm"
)

func TestBinaryFieldIntegrity(t *testing.T) {
    o := orm.NewOrm()

    // Test data with null bytes and high bytes
    testData := []byte{
        0x00, 0x01, 0x02, 0x03, // null bytes at start
        0xFF, 0xFE, 0xFD, 0xFC, // high bytes
        0x89, 0x50, 0x4E, 0x47, // PNG magic number
    }

    doc := &Document{
        Name:    "test_binary",
        Content: testData,
    }

    // Insert
    id, err := o.Insert(doc)
    if err != nil {
        t.Fatalf("Insert failed: %v", err)
    }

    // Retrieve
    retrieved := &Document{Id: id}
    err = o.Read(retrieved)
    if err != nil {
        t.Fatalf("Read failed: %v", err)
    }

    // Compare (fail fast on length so the loop below cannot panic)
    if len(retrieved.Content) != len(testData) {
        t.Fatalf("Length mismatch: got %d, want %d", len(retrieved.Content), len(testData))
    }
    for i := range testData {
        if retrieved.Content[i] != testData[i] {
            t.Errorf("Byte mismatch at position %d: got 0x%02X, want 0x%02X",
                i, retrieved.Content[i], testData[i])
        }
    }
}
```

## Checklist for BinaryField Issues

1. **Verify the field type is `[]byte`:**

   ```go
   Content []byte `orm:"type(blob)"` // Correct
   ```

2. **Check the blob size matches the expected data:**

   ```go
   // For files > 64 KB, use mediumblob or longblob
   Content []byte `orm:"type(longblob)"`
   ```

3. **Add a hash column for integrity:**

   ```go
   Hash string `orm:"size(64)"`
   ```

4. **Test with actual binary data:**

   ```go
   // Test with null bytes, high bytes, etc.
   testData := []byte{0x00, 0xFF, 0x89, 0x50}
   ```

5. **Verify the database column type:**

   ```sql
   SHOW CREATE TABLE documents;
   -- Content should be blob, mediumblob, or longblob
   ```