Archive 2023

Lightweight ASCII English Text Stream Compression in Python.

NOTE: updated documentation and source code are available at:

https://github.com/rodv92/PLETSC

PLETSC

Lightweight English text stream compression, with word tokens, ngrams, session dictionaries, and Huffmann coding for unknown words.

How to use :

git clone the repository and decompress dics.zip into the current folder.

Syntax for compression :

python3 dicstrv.py -c txt_inputfile compressed_outputfile Reads txt_inputfile and writes compressed text stream to compressed_outputfile.

python3 dicstrv.py -c txt_inputfile Reads txt_inputfile and writes compressed output to stdout.

Syntax for decompression :

python3 dicstrv.py -x compressed_inputfile txt_outputfile Reads compressed_inputfile and writes cleartext to txt_outputfile.

python3 dicstrv.py -x compressed_inputfile

Reads compressed_inputfile and writes cleartext output to stdout.

Syntax to generate a compiled dictionary of ngrams :

python3 dicstrv.py -d cleartext_ngrams_inputfile compressed_ngrams

This is rarely used in normal operation.

NOTE: the dictionary file count1_w.txt must be in the same directory as the script, and so must outngrams.bin if ngrams are used (secondpass=True).

Description :

This script is useful for ASCII English text stream compression. It is pedantic (the P in PLETSC stands for “pedantic”) because its final goal is to enforce a minimal set of English syntactic rules, such as whitespace after “,” but not before, capitalization after a “.”, etc. (but not grammar). Spell checking is recommended but should be done upstream (by another applicative layer), as it will ensure a better compression ratio, since compression is based on words of the English dictionary.

Its compression method is primarily based on a token (words and punctuation) dictionary. It leverages the frequency of modern English words:

  • Words of the primary dictionary are sorted from most used to least used.
  • The line number is used as an index (+1); index 0 is reserved for whitespace.
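As a sketch of the indexing scheme above, a minimal loader could build both lookup directions at once (the one-token-per-line file format is an assumption; the names engdict/engdictrev mirror the script's globals, but this is an illustration, not the script's actual loading code):

```python
# Sketch: build code -> token and token -> code maps from the primary
# dictionary, where the line number (+1) is the code and index 0 is
# reserved for whitespace, as described above.
def load_primary_dic(lines):
    engdict = {0: " "}       # code -> token
    engdictrev = {" ": 0}    # token -> code
    for lineno, word in enumerate(lines):
        engdict[lineno + 1] = word
        engdictrev[word] = lineno + 1
    return engdict, engdictrev

# Toy data: the three most frequent words get the lowest codes.
engdict, engdictrev = load_primary_dic(["the", "of", "and"])
assert engdictrev["the"] == 1
```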

It also uses adaptive length encoding (1, 2 or 3 bytes): the first 128 most used tokens are encoded on 1 byte, the next 16384 (up to index 16384 + 128) on 2 bytes, and the next 2097152 (up to index 2097152 + 16384 + 128) on 3 bytes.
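The adaptive layout can be sketched with plain bit arithmetic (an illustrative re-implementation of the MSB-flag scheme; the script itself does this with bitarray inserts, and encode_token_id is a hypothetical name):

```python
# Sketch of the 1/2/3-byte adaptive code layout: the MSB of each byte
# acts as a continuation flag, payload bits are little-endian.
def encode_token_id(tokenid):
    if tokenid < 128:
        # 1 byte: MSB 0, 7 payload bits
        return bytes([tokenid])
    if tokenid < 16384 + 128:
        v = tokenid - 128
        # 2 bytes: byte0 MSB 1, byte1 MSB 0, 14 payload bits
        return bytes([0x80 | (v & 0x7F), (v >> 7) & 0x7F])
    v = tokenid - (16384 + 128)
    # 3 bytes: byte0 and byte1 MSB 1, byte2 MSB 0, 21 payload bits
    return bytes([0x80 | (v & 0x7F),
                  0x80 | ((v >> 7) & 0x7F),
                  (v >> 14) & 0x7F])

assert len(encode_token_id(100)) == 1
assert len(encode_token_id(5000)) == 2
assert len(encode_token_id(100000)) == 3
```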

The 3 byte address space is split in two :

  • First part (when byte0 msb is 1 and byte1 msb is 1 and byte2 msb is 0) is further divided into two subspaces.
    • The first subspace is for the remainder of the primary dictionary (it has 333 333 tokens).
    • And the second subspace holds an Ngram dictionary (more on that later).
  • Second part (when byte0 msb is 1 and byte1 msb is 1 and byte2 msb is 1) is further divided into two subspaces.
    • First part is for a session dictionary, used to hold repeating unknown tokens. There are 2097152 – 5 codes available for this use. Initially empty and kept in RAM, it is a SESSION dictionary: it does not need to be sent between the two parties, as it can be reconstructed entirely from the compressed stream.
    • Second part is only 5 codes. (TODO: for now just 1 code, and switching between Huffmann and no compression is done with a boolean parameter.) It is an escape sequence meaning that the following bytes will be encoded with one of the following methods :
      • first code : as a stream of chars (no compression), plus a C style termination (chr(0)).
      • second code : Huffmann encoding, lowercase only.
      • third code : Huffmann, lowercase + uppercase, or uppercase only.
      • fourth code : Huffmann, lowercase + uppercase + numbers, or numbers only.
      • fifth code : the whole printable ASCII space, mainly for passwords. Each of these codes tells which Huffmann tree to use.
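The choice between these Huffmann trees boils down to character-class checks, which can be sketched with regexes (pick_tree and its return labels are illustrative names, not the script's actual API; the script's own variant of this logic is find_huffmann_to_use):

```python
import re

# Sketch: classify an unknown token into one of the escape-code
# classes listed above, from narrowest to widest character set.
def pick_tree(token):
    if re.fullmatch(r"[a-z]+", token):
        return "lowercase"
    if re.fullmatch(r"[A-Za-z]+", token):
        return "upper+lower"
    if re.fullmatch(r"[A-Za-z0-9]+", token):
        return "alphanumeric"
    return "all printable"

assert pick_tree("hello") == "lowercase"
assert pick_tree("P4ssw0rd!") == "all printable"
```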

Performance :

It offers a good compression ratio (between 2.6 and 3.0+); that is, sizes of around 33% to 38% of the ORIGINAL size, mainly depending on the lexical complexity or lexical archaism of the source text, and on the presence of unknown or misspelled words.

Higher lexical complexity or archaic texts, that is, input that uses words less common in current (2023) usage, will yield lower compression ratios.

The compression ratio is more or less stable: it is quite independent of text length.

This is contrary to block algorithms, which suffer from low compression on small files because of a significant overhead. For larger corpora, block algorithms usually perform better, and modern methods may use ML to provide context and adaptive encoding, but they are usually slower.

This is why this algorithm is intended for stream compression (on the fly). However, its current implementation reads from files and outputs to a file or stdout.

Compression speed (all options enabled)

For this test :

  • File is call_of_cthulhu.txt, uncompressed size 69 kB
  • Compression speed around 23.3 kB/s on an Intel(R) Core(TM) i5 CPU M 520 @ 2.40GHz (a computer from 2011), with SSD storage

Footprint (filesystem)

The zipped size of count1_w.txt + outngrams.bin is 11 566 806 bytes. The unzipped size is 31 327 633 bytes + 3 157 445 bytes = 34 485 078 bytes.

Footprint (memory)

To be determined

Dependencies

These Python modules are required :

codecs, nltk, re, bitstring, bitarray, struct, time, dahuffman

Requirements

The input text file must be ASCII (for now) or UTF-8 decodable to ASCII (English). Conversion errors are ignored. The decoded file will be encoded in ASCII. It should be in English to get an adequate conversion.
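A minimal sketch of that conversion step, assuming a lossy ASCII round-trip with errors ignored (to_ascii is a hypothetical helper; the script itself works file-by-file):

```python
# Sketch: accept UTF-8 input but keep only what survives an ASCII
# round-trip, silently dropping undecodable characters.
def to_ascii(raw_bytes):
    return (raw_bytes.decode("utf-8", errors="ignore")
                     .encode("ascii", errors="ignore")
                     .decode("ascii"))

# Non-ASCII characters are dropped (lossy conversion).
assert to_ascii("café naïve".encode("utf-8")) == "caf nave"
```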

Both ends (sender and receiver) MUST have the SAME dictionaries and the SAME Huffmann tables, as these are not sent with the data.

Information about the dictionaries

The primary dictionary is based on the “count_1w.txt” English dictionary of 333 333 words (ordered by lexical prevalence), tweaked with added special characters (also listed by order of prevalence) and added English contractions, and with the word count numbers stripped off.

The original primary dictionary file is available on : https://norvig.com/ngrams/

It also features a secondary (optional) compression pass based on a compiled dictionary named outngrams.bin.

It compresses 4- and 5-word ngrams found in the first-pass compression stream. Ngrams of fewer than 4 words are deemed not interesting, as the first pass will usually encode them on 3 bytes, the same size as a compressed ngram.
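The second-pass idea can be sketched as a greedy sliding-window substitution (hypothetical names and toy data; the real table is built from outngrams.bin, and the real pass also weighs overlapping matches and skips escape sequences):

```python
# Sketch: slide a 4-token window over the first-pass token codes and
# replace known 4-grams by a single ngram code.
def second_pass(token_ids, ngram_index):
    out, i = [], 0
    while i < len(token_ids):
        window = tuple(token_ids[i:i + 4])
        if len(window) == 4 and window in ngram_index:
            out.append(ngram_index[window])  # one code replaces four
            i += 4
        else:
            out.append(token_ids[i])
            i += 1
    return out

# Toy table mapping one 4-gram of token ids to a single ngram code.
ngram_index = {(1, 2, 3, 4): 900000}
assert second_pass([7, 1, 2, 3, 4, 9], ngram_index) == [7, 900000, 9]
```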

Compression and decompression require the primary dictionary to be available, and the secondary one if the boolean secondpass is set to True (the default).

The zip “dics.zip” already contains a compiled version of these dictionaries.

More information

The algorithm is heavily commented in the code.

Field of application

Main applications could be messaging over low-bandwidth links like POCSAG radio text, JS8Call for HAM radio, and IoT.

However, note that the underlying digital mode should allow binary transmission (not only transmission of printable ASCII characters) for seamless integration.

TODO and ISSUES :

See comments in the code.

Main issues for now are syntactic rules and spurious whitespace, absence of whitespace where it should be, problems with hyphenated tokens, spurious newlines, problems with some possessive forms, and special constructs other than emails and well-formed URLs.

Ngrams Processing from scratch :

Useful if you want to tweak or create your own dictionaries. We will mainly discuss the outngrams.bin dictionary, as count1_w.txt tweaking is straightforward. Note that count1_w.txt should not be modified once outngrams.bin is generated, or you will have to rebuild outngrams.bin.

A preparatory step is required to generate a compressed version of the ngrams files, if you want to do it from scratch :

First create the ngrams CSV using this code repo : https://github.com/orgtre/google-books-ngram-frequency/tree/main/python

The repo contains scripts that download and concatenate ngrams according to criteria you specify. Note that PLETSC has limited space in the first subspace of the 3-byte address space: more or less 2097152 – 333333 codes. I have created an ngram list of 1571125 ngrams. The distribution between 4-grams and 5-grams is roughly 50%/50%.

The resulting CSV files need to be further processed by our algorithm.

The script that creates outngrams.bin (the secondary compiled dictionary based on the primary dictionary and the ngrams CSV files from google-books-ngram) is called ngrams_format_dic.py. The script is commented to explain what each line does.

# LIGHTWEIGHT ENGLISH TEXT STREAM COMPRESSION (LETSC)
# (adaptive encoding length 1byte/2byte/3byte based on word dictionary with statistical prevalence ordering - count1_w.txt)
# Huffmann encoding for unknown tokens
# Enforces English syntax rules for punctuation
# Takes into account possessives and contractions
# Has URLs and e-mails processing rules, more to follow
# Second pass compression using a dictionary of the most frequent 4 N-Grams of English fiction.

#GPL 3 License
# www.skynext.tech
# Rodrigo Verissimo
# v0.92
# October 21st, 2023


# Python + packages Requirements

# Python 3.9
# nltk, bitarray, bitstring, re, dahuffman

# Performance : ratios around x2.6 for Middle to Modern or elaborate English (ex: Shakespeare)
# Up to x3 and more for simple English.
# adapted for text messaging / streaming
# Requires the same dictionary on both channel endpoints.

# ALGORITHM. Very straightforward. (adaptive encoding length based on dictionary with statistical ordering)

#################################################################################
# First byte :

#if MSB is 0, a whole word is encoded on the first 7 bits of one byte only.
#This makes 127 possible words. These are reserved for the first 127 most used 
#english words. punctuation also appears as a possible word

# Second byte :

#if MSB of first byte is 1, and MSB of second byte is 0, a whole word is encoded
# with the help of the 7 LSB of byte 1 plus the 7 LSB of byte 2. 
# This makes room for the next 16384 most used english words.

# Third byte :
# if MSB of first byte is 1 and MSB of second byte is 1, and the MSB of third byte is 0
# a whole word is encoded
# with the help of the 7 + 7 + 7 = 21 bits (2 097 152 possible words)

# For now, the 3 byte address space is split into two 2 097 152 address spaces
# That is, the case of all 3 bytes MSB being 1 is treated separately.
# In this address space, only a handful of codes are used as an escape sequence for particular 
# Huffmann trees, see below.

#->
#load dictionary of english words from most used to least used.
#punctuation and special characters have been added with order of prevalence.
#punctuation frequency is from wikipedia corpus. (around 1.3 billion words) 
#it has been normalized to the frequency of the 1/3 million word list based 
#on the Google Web Trillion Word Corpus. that is, frequencies for special chars have been multiplied by 788.39
#wikipedia punctuation is not optimized for chat, as it lowers the prevalence of chars like question marks
#that may appear more frequently in chat situations.

# the first tokenizer used does not separate any special character attached (without whitespace) to a word
# this will mostly result in an unknown word in the dictionary
# this key absence in the reverse dict will be caught and treated by another tokenizer (mainly for possessive
# forms and contractions)

#for possessives ex: "dog's bone" or plural "dogs' bones" a separate tokenizer is used to split into
# "dog" , "'s"
# "'s" and "'" also appear in the dictionary.

# ROADMAP
# remove whitespaces left of punctuation DONE
# manage new lines DONE
# manage websites and emails DONE
# TODO
# add spell check ! 
# TODO
# Remove spurious new lines that appear after encoding special sequences such as emails or URLs
# DONE (basic Huffmann, some chars missing in tree)
# add Huffmann encoding for absent words in dictionary (neologisms,colloqualisms,dialects, or misspellings) DONE
# DONE

# TODO : test with more texts such as wikipedia XML and various authors works, to catch as much
# use cases and formatting issues that arise to improve the algorithm

# add adaptive Huffmann. use 4 Huffmann trees. (see below)
# Assuming there are 4 codes for Huffmann : Huffmann lower case, Huffmann lower + capitals, Huffmann
# lower + capitals + numeric, all printable ASCII excluding whitespace : same as preceding category plus 
# special chars.
# Choosing the tree to use would be done by string regex.

#DONE
# Detect UTF-8 and transcode to ASCII (potentially lossy)
#DONE


# TODO
# Dictionary Learn over time (re-shuffle the order of tokens)
# Without transmission of any info between parties
# Dangerous if sync is lost between the two parties
# TODO

# TODO
# optimize Huffmann part to remove the need for the chr(0) termination = scan for EOF sequence in Huffmann to get
# the Huffmann byte sequence length. TODO


# DONE
# Add second pass compression using word N-grams lookup table. (4 and 5 N-grams seem to be a good compromise)
# The idea is to encode 4 and 5 token substrings in a line by a single 3 byte code.
# There is plenty of room left in the 3 byte address space. For now, 333 333 - 16384 - 128 = 316821 tokens are used
# out of the 4194304 - 3 total address space.
# DONE using 1 571 125 codes for a 50/50 mix of 4grams and 5grams.
# There is still at least 2million codes left.
#  for now we plan 4 escape sequences for the selection of one of the 4 Huffmann trees.


# ngrams processing is first done with the create_ngrams_dic.sh script.
"""
python3 ngrams_format_dic.py 4grams_english-fiction.csv outngrams4.txt #remove counts and process contractions
python3 ngrams_format_dic.py 5grams_english-fiction.csv outngrams5.txt #remove counts and process contractions

python3 dicstrv4.py -d outngrams4.txt outngrams4.bin.dup #convert ngrams txt to compressed form
python3 dicstrv4.py -d outngrams5.txt outngrams5.bin.dup #convert ngrams txt to compressed form
awk '!seen[$0]++' outngrams4.bin.dup > outngrams4.bin #Remove spurious duplicates that may arise
awk '!seen[$0]++' outngrams5.bin.dup > outngrams5.bin #Remove spurious duplicates that may arise
sed -i '786001,$ d' outngrams4.bin # truncate to fit target address space
sed -i '786001,$ d' outngrams5.bin # truncate to fit target address space

cat outngrams4.bin outngrams5.bin > outngrams.bin # concatenate. this is our final form
cat outngrams.bin | awk '{ print length, $0 }' | sort -n -s | cut -d" " -f2- > sorted.txt # sort by size to have an idea of distribution

# ngrams that encode as less than 4 bytes have been pruned since the ratio is 1

"""

# DONE 
# It is probable that the most used 4 tokens N-grams are based on already frequent words. that individually
# encode as 1 byte or two bytes.
# Worst case : all the 4 tokens are encoded in the 1 to 128 address space, so they take a total of 4 bytes.
# The resulting code will be 3 bytes, a deflate percent of 25%
# If one of the tokens is 2 byte (128 to 16384 -1 address space), then it uses 5 bytes.
# deflate percent is 40%
# The unknown is the statistical prevalence of two million 4 token N-grams.
# (ex: coming from english fiction corpus) in a standard chat text.

# First encode the google most frequent 4 and 5 N-grams csv file to replace the tokens in each N-gram by the corresponding 
# byte sequences from our codes in the count_1w.txt dictionary. This will be another pre-process script.
# The resulting new csv format will be :
# some 3 byte index = x04x09x23.
# The 3 byte index is simply the line number of the compressed ngram. 

# read that in RAM. Conservative estimate : 4 bytes + 3 bytes per entry = 7 bytes * 2 000 000 = 14 MB memory footprint.
# We already have an estimated 12 MB footprint from count_1w (4 MB * 3).

# Generate the inverse map dictionary (mapping sequences to 3 byte indexes)
# x04x09x23' = some 3 byte index
# Should not be a problem since there is a 1 to 1 relationship between the two

# Then perform a first pass compression.
# Then scan the first pass compression file using a 4 token sliding window.
# Contractions are a case that will have to be managed.

# If there are overlapping matches, choose the match that results in the best deflation, if any.
# If the unknown word escape codes appears, stop processing and resume after the escaped word

# Overall, replace the byte sequence by the corresponding 3 byte sequence.
# DONE



import sys
import traceback

#print(len(sys.argv))
#op = (sys.argv[1]).encode("ascii").decode("ascii")
#print(op)
#quit()

if ((len(sys.argv) < 3) or (len(sys.argv) > 4)):
    print("Syntax for compression :\n")
    print("python3 dicstrv.py -c <txt_inputfile> <compressed_outputfile>")
    print("Reads txt_inputfile and writes compressed text stream to compressed_outputfile.\n") 
    
    print("python3 dicstrv.py -c <txt_inputfile>")
    print("Reads txt_inputfile and writes compressed output to stdout\n")

    print("Syntax for decompression :\n")
    print("python3 dicstrv.py -x <compressed_inputfile> <txt_outputfile>")
    print("Reads compressed_inputfile and writes cleartext to txt_outputfile.\n") 
    
    print("python3 dicstrv.py -x <compressed_inputfile>\n")
    print("Reads compressed_inputfile and writes cleartext output to stdout\n")

    print("NOTE: dictionary file count1_w.txt must be in the same directory as the script.")    
    quit()

if (sys.argv[1] == "-c"):
    compress = True
    gendic = False
elif (sys.argv[1] == "-d"):
    compress = True
    gendic = True
elif (sys.argv[1] == "-x"):
    compress = False
    gendic = False
else:
    print("unknown operation: " + str(sys.argv[1]) + ". Type 'python3 dicstrv.py' for help")
    quit()

if (len(sys.argv) == 3):
    infile = sys.argv[2]
    outfile = ''
if (len(sys.argv) == 4):
    infile = sys.argv[2]
    outfile = sys.argv[3]

import codecs
import nltk
from nltk.tokenize import TweetTokenizer
tknzr = TweetTokenizer()

import re
import bitstring
from bitarray import bitarray
import struct
import time
from dahuffman import HuffmanCodec


debug_on = False
debug_ngrams_dic = False
secondpass = True
use_huffmann = False
unknown_token_idx = 16384 + 128 + 2097152


def debugw(strdebug):
    if (debug_on):
        print(strdebug)

# Huffmann is only used for absent words in count1_w.txt dictionary
# General lower and upper case frequency combined as lowercase



codec_lower = HuffmanCodec.from_frequencies(
{'e' :   56.88,	'm' :	15.36,
'a'	:	43.31,	'h'	:	15.31,
'r'	:	38.64,	'g'	:	12.59,
'i'	:	38.45,	'b'	:	10.56,
'o'	:	36.51,	'f'	:	9.24,
't'	:	35.43,	'y'	:	9.06,
'n'	:	33.92,	'w'	:	6.57,
's'	:	29.23,	'k'	:	5.61,
'l'	:	27.98,	'v'	:	5.13,
'c'	:	23.13,	'x'	:	1.48,
'u'	:	18.51,	'z'	:	1.39,
'd'	:	17.25,	'j'	:	1,
'p'	:	16.14,	'q'	:	1
}
)

debugw(codec_lower.get_code_table())

# following is ASCII mixed upper and lower case frequency from an English writer from Palm OS PDA memos in 2002
# Credit : http://fitaly.com/board/domper3/posts/136.html

codec_upperlower = HuffmanCodec.from_frequencies(

{'A' : 0.3132,
'B' : 0.2163,
'C' : 0.3906,
'D' : 0.3151,
'E' : 0.2673,
'F' : 0.1416,
'G' : 0.1876,
'H' : 0.2321,
'I' : 0.3211,
'J' : 0.1726,
'K' : 0.0687,
'L' : 0.1884,
'M' : 0.3529,
'N' : 0.2085,
'O' : 0.1842,
'P' : 0.2614,
'Q' : 0.0316,
'R' : 0.2519,
'S' : 0.4003,
'T' : 0.3322,
'U' : 0.0814,
'V' : 0.0892,
'W' : 0.2527,
'X' : 0.0343,
'Y' : 0.0304,
'Z' : 0.0076,
'a' : 5.1880,
'b' : 1.0195,
'c' : 2.1129,
'd' : 2.5071,
'e' : 8.5771,
'f' : 1.3725,
'g' : 1.5597,
'h' : 2.7444,
'i' : 4.9019,
'j' : 0.0867,
'k' : 0.6753,
'l' : 3.1750,
'm' : 1.6437,
'n' : 4.9701,
'o' : 5.7701,
'p' : 1.5482,
'q' : 0.0747,
'r' : 4.2586,
's' : 4.3686,
't' : 6.3700,
'u' : 2.0999,
'v' : 0.8462,
'w' : 1.3034,
'x' : 0.1950,
'y' : 1.1330,
'z' : 0.0596
})

debugw(codec_upperlower.get_code_table())

# following is ASCII alpha numeric frequency from an English writer from Palm OS PDA memos in 2002
# Credit : http://fitaly.com/board/domper3/posts/136.html

codec_alphanumeric = HuffmanCodec.from_frequencies(

{'0' : 0.5516,
'1' : 0.4594,
'2' : 0.3322,
'3' : 0.1847,
'4' : 0.1348,
'5' : 0.1663,
'6' : 0.1153,
'7' : 0.1030,
'8' : 0.1054,
'9' : 0.1024,
'A' : 0.3132,
'B' : 0.2163,
'C' : 0.3906,
'D' : 0.3151,
'E' : 0.2673,
'F' : 0.1416,
'G' : 0.1876,
'H' : 0.2321,
'I' : 0.3211,
'J' : 0.1726,
'K' : 0.0687,
'L' : 0.1884,
'M' : 0.3529,
'N' : 0.2085,
'O' : 0.1842,
'P' : 0.2614,
'Q' : 0.0316,
'R' : 0.2519,
'S' : 0.4003,
'T' : 0.3322,
'U' : 0.0814,
'V' : 0.0892,
'W' : 0.2527,
'X' : 0.0343,
'Y' : 0.0304,
'Z' : 0.0076,
'a' : 5.1880,
'b' : 1.0195,
'c' : 2.1129,
'd' : 2.5071,
'e' : 8.5771,
'f' : 1.3725,
'g' : 1.5597,
'h' : 2.7444,
'i' : 4.9019,
'j' : 0.0867,
'k' : 0.6753,
'l' : 3.1750,
'm' : 1.6437,
'n' : 4.9701,
'o' : 5.7701,
'p' : 1.5482,
'q' : 0.0747,
'r' : 4.2586,
's' : 4.3686,
't' : 6.3700,
'u' : 2.0999,
'v' : 0.8462,
'w' : 1.3034,
'x' : 0.1950,
'y' : 1.1330,
'z' : 0.0596
})

debugw(codec_alphanumeric.get_code_table())

# following is Whole ASCII printable chars frequency except whitespace from an English writer from Palm OS PDA memos in 2002
# Credit : http://fitaly.com/board/domper3/posts/136.html

codec_all = HuffmanCodec.from_frequencies(

{'!' : 0.0072,
'\"' : 0.2442,
'#' : 0.0179,
'$' : 0.0561,
'%' : 0.0160,
'&' : 0.0226,
'\'' : 0.2447,
'(' : 0.2178,
')' : 0.2233,
'*' : 0.0628,
'+' : 0.0215,
',' : 0.7384,
'-' : 1.3734,
'.' : 1.5124,
'/' : 0.1549,
'0' : 0.5516,
'1' : 0.4594,
'2' : 0.3322,
'3' : 0.1847,
'4' : 0.1348,
'5' : 0.1663,
'6' : 0.1153,
'7' : 0.1030,
'8' : 0.1054,
'9' : 0.1024,
':' : 0.4354,
';' : 0.1214,
'<' : 0.1225,
'=' : 0.0227,
'>' : 0.1242,
'?' : 0.1474,
'@' : 0.0073,
'A' : 0.3132,
'B' : 0.2163,
'C' : 0.3906,
'D' : 0.3151,
'E' : 0.2673,
'F' : 0.1416,
'G' : 0.1876,
'H' : 0.2321,
'I' : 0.3211,
'J' : 0.1726,
'K' : 0.0687,
'L' : 0.1884,
'M' : 0.3529,
'N' : 0.2085,
'O' : 0.1842,
'P' : 0.2614,
'Q' : 0.0316,
'R' : 0.2519,
'S' : 0.4003,
'T' : 0.3322,
'U' : 0.0814,
'V' : 0.0892,
'W' : 0.2527,
'X' : 0.0343,
'Y' : 0.0304,
'Z' : 0.0076,
'[' : 0.0086,
'\\' : 0.0016,
']' : 0.0088,
'^' : 0.0003,
'_' : 0.1159,
'`' : 0.0009,
'a' : 5.1880,
'b' : 1.0195,
'c' : 2.1129,
'd' : 2.5071,
'e' : 8.5771,
'f' : 1.3725,
'g' : 1.5597,
'h' : 2.7444,
'i' : 4.9019,
'j' : 0.0867,
'k' : 0.6753,
'l' : 3.1750,
'm' : 1.6437,
'n' : 4.9701,
'o' : 5.7701,
'p' : 1.5482,
'q' : 0.0747,
'r' : 4.2586,
's' : 4.3686,
't' : 6.3700,
'u' : 2.0999,
'v' : 0.8462,
'w' : 1.3034,
'x' : 0.1950,
'y' : 1.1330,
'z' : 0.0596,
'{' : 0.0026,
'|' : 0.0007,
'}' : 0.0026,
'~' : 0.0003,
})

debugw(codec_all.get_code_table())
#quit()        

def check_file_is_utf8(filename):
    debugw("checking encoding of:")
    debugw(filename)
    try:
        f = codecs.open(filename, encoding='utf-8', errors='strict')
        for line in f:
            pass
        debugw("Valid utf-8")
        return True
    except UnicodeDecodeError:
        debugw("invalid utf-8")
        return False

def find_huffmann_to_use(token):

    if(not use_huffmann):
        debugw("do not use Huffmann, encode char by char")
        return 0
    
    not_alllower = re.search("[^a-z]", token)
    
    if(not not_alllower):
        debugw("all lower case")
        return 1
    
    not_alllowerorupper = re.search("[^A-Za-z]", token)
    
    if(not not_alllowerorupper):
        debugw("all lower or upper")
        return 2
    
    not_allalphanumeric = re.search("[^A-Za-z0-9]", token)
    
    if(not not_allalphanumeric):
        debugw("all alpha numeric")
        return 3
    else:
        debugw("all printable, except whitespace")
        return 4
    
def encode_unknown(token,treecode):

    if (treecode == 0):
        bytes_unknown = bytearray()
        for charidx in range(0, len(token)):
            debugw("appending chars..")
            debugw(token[charidx])

            # only append if it is not an unexpected termination in the unknown token
            if (not ord(token[charidx]) == 0):
                bytes_unknown.append(ord(token[charidx]))
            else:
                debugw("unexpected termination chr(0) in unknown token, discarding character")


        return bytes_unknown
    if (treecode == 1):
        return codec_lower.encode(token)
    if (treecode == 2):
        return codec_upperlower.encode(token)           
    if (treecode == 3):
        return codec_alphanumeric.encode(token)                      
    if (treecode == 4):
        return codec_all.encode(token)                      

def decode_unknown(bytetoken,treecode):

    if (treecode == 1):
        return codec_lower.decode(bytetoken)
    if (treecode == 2):
        return codec_upperlower.decode(bytetoken)           
    if (treecode == 3):
        return codec_alphanumeric.decode(bytetoken)                      
    if (treecode == 4):
        return codec_all.decode(bytetoken)  

def compress_token_or_subtoken(compressed,line_token,token_of_line_count,lentoken,gendic):
  
    
    global unknown_token_idx

    try:

        # is the token in english dictionary ?
        debugw("line_token:" + line_token)
        tokenid = engdictrev[line_token]
        subtokensid = [tokenid]

        
    except:
        debugw("unknown word, special chars adjunct, or possessive form")
        # let's try to split the unknown word from possible adjunct special chars
        # for this we use another tokenizer
        subtokens = nltk.word_tokenize(line_token)
        if (len(subtokens) == 1):
            # no luck...
            # TODO : do not drop the word silently, encode it !
            # If we encode a ngram dic, skip ngrams with unknown tokens in the primary dic.
            # and return empty bytearray to signify ngram compression failure 
            if(gendic):
                compressed = bytearray()
                debugw("gendic : unknown word")
                return (compressed, token_of_line_count)
        
            debugw("unknown word")

            #AMEND dictionary 
            # add this unknown subtoken to a session dic so it can be recalled.
            debugw("unknown word: " + subtokens[0] + " adding to session dic at id: " + str(unknown_token_idx))
            debugw("unknown word, adding to session dic at id: " + str(unknown_token_idx))
            
            engdictrev[subtokens[0]] = unknown_token_idx
            engdict[unknown_token_idx] = subtokens[0]
            unknown_token_idx += 1
                       

            #subtokensid = [4194304 - 1] # subtoken code for unknown word escape sequence.                       
            subtokensid = [4194303 - find_huffmann_to_use(subtokens[0])]                   
            #print(subtokensid)
            #continue
        else:
            debugw("possible special char found")
            subtokensid = []
            for subtoken in subtokens:
                debugw("subtoken=")
                debugw(subtoken)
                try:
                    subtokensid.append(engdictrev[subtoken])
                except:
                    # no luck...
                    # TODO : do not drop the word silently, encode it !
        
                    # If we encode a ngram dic, skip ngrams with unknown tokens in the primary dic.
                    # and return empty bytearray to signify ngram compression failure 
                    if(gendic):
                        compressed = bytearray()
                        debugw("gendic : unknown word")
                        return (compressed, token_of_line_count)
        
                    debugw("unknown subtoken")
                    subtokensid.append(4194303 - find_huffmann_to_use(subtoken))
                    #subtokensid.append(4194304 - 1)
                    
                    # add this unknown subtoken to a session dic so it can be recalled.
                    #AMEND dictionary 
                    # add this unknown subtoken to a session dic so it can be recalled.
                    debugw("unknown subtoken: " + subtoken + " adding to session dic at id: " + str(unknown_token_idx))
                    debugw("unknown subtoken, adding to session dic at id: " + str(unknown_token_idx))
                    engdictrev[subtoken] = unknown_token_idx
                    engdict[unknown_token_idx] = subtoken
                    unknown_token_idx += 1
                    #continue
    subtokenidx = 0
    for subtokenid in subtokensid:        
        
        debugw("subtokenid=")
        debugw(subtokenid)
        # maximum level of token unpacking is done
        if(subtokenid < 128):

            debugw("super common word")
            debugw(engdict[subtokenid])

            #convert to bytes
            byte0 = subtokenid.to_bytes(1, byteorder='little')
            debugw("hex:")
            debugw(byte0.hex())

            #append to bytearray
            compressed.append(byte0[0])

        if(128 <= subtokenid < 16384 + 128):

            debugw("common word")

            #remove offset
            debugw(engdict[subtokenid])
            subtokenid -= 128
            
            #convert to bytes1 (array of 2 bytes)
            bytes1 = subtokenid.to_bytes(2,byteorder='little')
            debugw("".join([f"\\x{byte:02x}" for byte in bytes1]))
        
            #convert to bitarray
            c = bitarray(endian='little')
            c.frombytes(bytes1)
            debugw(c)
            
            # set msb of first byte to 1 and shift the more significant bits up.
            c.insert(7,1)
            debugw(c)
            
            # remove excess bit
            del c[16:17:1]
            debugw(c)
            
            # append our two tweaked bytes to the compressed bytearray
            compressed.append((c.tobytes())[0])
            compressed.append((c.tobytes())[1])

        #if(16384 +128 <= subtokenid < 4194304 - 1):
        if(16384 +128 <= subtokenid < 2097152 + 16384 + 128):


            debugw("rare word")
            
            # remove offset
            debugw(engdict[subtokenid])
            subtokenid -= (16384 + 128)

            #convert to bytes1 (array of 3 bytes)
            bytes2 = subtokenid.to_bytes(3,byteorder='little')
            debugw("".join([f"\\x{byte:02x}" for byte in bytes2]))

            #convert to bitarray
            c = bitarray(endian='little')
            c.frombytes(bytes2)
            debugw(c)
            
            # set msb of first byte to 1 and shift the bits above up.
            c.insert(7,1)
            debugw(c)

            # set msb of second byte to 1 and shift the bits above up.
            c.insert(15,1)
            debugw(c)

            # remove two excess bits that arose from our shifts
            del c[24:26:1]
            debugw(c)
            
            # append our three tweaked bytes to the compressed bytearray
            compressed.append((c.tobytes())[0])
            compressed.append((c.tobytes())[1])
            compressed.append((c.tobytes())[2])


                #if(16384 +128 <= subtokenid < 4194304 - 1):
        if(16384 +128 + 2097152 <= subtokenid < 4194304 - 5):


            debugw("unknown word from session DIC")
            
            # remove offset
            debugw(engdict[subtokenid])
            subtokenid -= (2097152 + 16384 + 128)

            #convert to bytes1 (array of 3 bytes)
            bytes2 = subtokenid.to_bytes(3,byteorder='little')
            debugw("".join([f"\\x{byte:02x}" for byte in bytes2]))

            #convert to bitarray
            c = bitarray(endian='little')
            c.frombytes(bytes2)
            debugw(c)
            
            # set msb of first byte to 1 and shift the bits above up.
            c.insert(7,1)
            debugw(c)

            # set msb of second byte to 1 and shift the bits above up.
            c.insert(15,1)
            debugw(c)

            # set msb of third byte to 1 and shift the bits above up.
            c.insert(23,1)
            debugw(c)


            # remove three excess bits that arose from our shifts
            del c[24:27:1]
            debugw(c)
            
            # append our three tweaked bytes to the compressed bytearray
            compressed.append((c.tobytes())[0])
            compressed.append((c.tobytes())[1])
            compressed.append((c.tobytes())[2])


        #if(subtokenid == (4194304 - 1)):
        if(subtokenid in range(4194299,4194304)):

            #compressed.append(255)
            #compressed.append(255)
            #compressed.append(255)
            debugw("huffmann tree code :" + str(subtokenid))

            # TODO : Use Huffmann tree instead of byte->byte encoding.
            
            #convert to bytes2 (array of 3 bytes)
            bytes2 = subtokenid.to_bytes(3,byteorder='little')
            debugw("".join([f"\\x{byte:02x}" for byte in bytes2]))

            #convert to bitarray
            c = bitarray(endian='little')
            c.frombytes(bytes2)
            debugw(c)
            
            # set msb of first byte to 1 and shift the bits above up.
            c.insert(7,1)
            debugw(c)

            # set msb of second byte to 1 and shift the bits above up.
            c.insert(15,1)
            debugw(c)

            # no need to set  msb of third byte to 1 since the range will take care of it.
            #c.insert(23,1)
            #debugw(c)

            # remove two excess bits that arose from our shifts
            del c[24:26:1]
            debugw(c)
            
            # append our three tweaked bytes that signify the huffmann tree to use to the compressed bytearray
            compressed.append((c.tobytes())[0])
            compressed.append((c.tobytes())[1])
            compressed.append((c.tobytes())[2])

            if (len(subtokens) == 1):
                if(not use_huffmann):
                    debugw("encoding unknown word")
                    #for charidx in range(0, len(line_token)):
                    #    debugw("appending chars..")
                    #    debugw(line_token[charidx])
                    #    compressed.append(ord(line_token[charidx]))
                    compressed.extend(encode_unknown(line_token,0))
                else:
                    debugw("encoding unknown line token with Huffmann")
                    huffmann_tree_code = -(subtokenid - 4194303)
                    compressed.extend(encode_unknown(line_token,huffmann_tree_code))
            else:
                if(not use_huffmann):
                    debugw("encoding unknown subtoken")
                    #for charidx in range(0, len(subtokens[subtokenidx])):
                    #    debugw("appending chars..")
                    #    debugw((subtokens[subtokenidx])[charidx])
                    #    compressed.append(ord((subtokens[subtokenidx])[charidx]))
                    compressed.extend(encode_unknown(subtokens[subtokenidx],0))
                else:
                    debugw("encoding unknown subtoken with Huffmann")
                    debugw(subtokens[subtokenidx])
                    #huffmann_tree_code = find_huffmann_to_use(subtokens[subtokenidx])
                    huffmann_tree_code = -(subtokenid - 4194303)
                    compressed.extend(encode_unknown(subtokens[subtokenidx],huffmann_tree_code))
            compressed.append(0) # terminate c string style
        subtokenidx += 1        
    token_of_line_count += 1

    debugw("token of line count")
    debugw(token_of_line_count)
    debugw("lentoken")
    debugw(lentoken)

    if((token_of_line_count == lentoken) and (not gendic)):
        # newline
        debugw("append new line")
        compressed.append(0)
        #quit()  

    return (compressed,token_of_line_count)
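The insert-MSB-and-trim bitarray dance above implements a little-endian 1/2/3-byte variable-length code: a set MSB on a byte means "one more byte follows". A minimal standalone sketch of the same scheme with plain integer arithmetic (the helper names are mine, not the script's, and the session-dictionary and Huffmann escape ranges are left out):

```python
# Hypothetical helpers, equivalent in spirit to the bitarray-based encoding
# above (c.insert(7,1) / c.insert(15,1) then deleting the excess bits).

def encode_token_id(token_id):
    """Encode a dictionary index into 1, 2 or 3 little-endian bytes."""
    if token_id < 128:
        # super common word: single byte, MSB clear
        return bytes([token_id])
    if token_id < 16384 + 128:
        val = token_id - 128
        # common word: first byte MSB set, second byte MSB clear
        return bytes([(val & 0x7F) | 0x80, (val >> 7) & 0x7F])
    val = token_id - (16384 + 128)
    # rare word: first two MSBs set, third byte MSB clear
    return bytes([(val & 0x7F) | 0x80,
                  ((val >> 7) & 0x7F) | 0x80,
                  (val >> 14) & 0x7F])

def decode_token_id(data):
    """Decode one variable-length token back to its dictionary index."""
    if not data[0] & 0x80:
        return data[0]
    if not data[1] & 0x80:
        return ((data[1] << 7) | (data[0] & 0x7F)) + 128
    return ((data[2] << 14) | ((data[1] & 0x7F) << 7) | (data[0] & 0x7F)) \
           + 16384 + 128
```

Round-tripping a few ids through these two helpers reproduces the payload layout that the bitarray shifts build byte by byte.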


def compress_tokens(tokens,gendic):

    #time.sleep(0.001)    
    # Init byte array
    compressed = bytearray()
    
    debugw("tokens are:")
    debugw(tokens)

    for token in tokens:

        debugw("token is:")
        debugw(token)

        token_of_line_count = 0
        # start compression run
        if(not len(token) and (not gendic)):
            debugw("paragraph")
            compressed.append(0)
            #compressed.append(0)
            #quit()
        lentoken = len(token)
        if (not gendic):
            for line_token in token:           
                (compressed, token_of_line_count) = compress_token_or_subtoken(compressed,line_token,token_of_line_count,lentoken,gendic)
        else:
                (compressed, token_of_line_count) = compress_token_or_subtoken(compressed,token,token_of_line_count,lentoken,gendic)           
                if(not len(compressed)):
                    debugw("unknown word in gendic sequence, aborting")
                    compressed = bytearray()
                    return compressed
    # dump whole compressed stream
    debugw("compressed ngram is=")
    debugw(compressed.hex())
    debugw("compressed ngram byte length is=")
    debugw(len(compressed))

    return compressed

def compress_second_pass(compressed):

    ngram_compressed = bytearray()
    ngram_length = 0
    ngram_byte_length = 0
    index_jumps = []
    candidates = []
    idx = 0
    # second pass main loop
    #debugw("compressed=")
    #debugw(compressed)
    while (idx < len(compressed)):

        debugw("second pass idx=")
        debugw(idx)
        idxchar = 0
        reset_ngram = False
        debugw("indexjumps=")
        debugw(index_jumps)


        if(not (compressed[idx] & 128)):
            ngram_compressed.append(compressed[idx])
            debugw("".join([f"\\x{byte:02x}" for byte in ngram_compressed]))
            debugw("super common ext")
            idx += 1
            index_jumps.append(1)
            ngram_byte_length += 1
        elif((compressed[idx] & 128) and (not (compressed[idx+1] & 128))):
            ngram_compressed.extend(compressed[idx:idx+2])
            debugw("".join([f"\\x{byte:02x}" for byte in ngram_compressed]))
            debugw("common ext")
            idx += 2
            index_jumps.append(2)
            ngram_byte_length += 2
        elif((compressed[idx] & 128) and (compressed[idx+1] & 128) and (not compressed[idx+2] & 128)):
            ngram_compressed.extend(compressed[idx:idx+3]) 
            debugw("".join([f"\\x{byte:02x}" for byte in ngram_compressed]))
            debugw("rare ext")
            idx += 3  
            index_jumps.append(3)
            ngram_byte_length += 3     
        elif((compressed[idx] == 255) and (compressed[idx+1] == 255) and (compressed[idx+2] == 255)):
            # TODO : take into account 4 escape sequences instead of only one.
            #reset ngram_compressed
            char = compressed[idx+3]
            debugw("unknown token sequence detected")
            #print(char)
            #str = ""
            idxchar = 0
            while(char != 0):
                   idxchar += 1
                   char = compressed[idx+3+idxchar]
                   debugw("char=")
                   debugw(char)
            debugw("end of unknown token sequence detected at idx:")
            idx += (3 + idxchar)
            debugw(idx)
            index_jumps.append(3 + idxchar)
            ngram_length -= 1
            reset_ngram = True
         
        elif((compressed[idx] & 128) and (compressed[idx+1] & 128) and (compressed[idx+2] & 128)):
            # Session DIC space, breaks ngram construction.
            debugw("session DIC space, we break ngram construction")
            idx += 3
            index_jumps.append(3)
            ngram_length -= 1
            reset_ngram = True
    

        ngram_length += 1
        debugw("indexjumps=")
        debugw(index_jumps)
        debugw("ngram_length")
        debugw(ngram_length)

        if (((ngram_length == 3) and (ngram_byte_length > 3)) or (ngram_length == 4)):
            # if there are contractions, apparent ngram length will be one token less and potentially present in N4 ngrams
            # try to replace the ngram if it exists, and only if ngram_byte_length is > 3, otherwise there will be no compression gain.
            # save index jumps for rewind operations.
            # TO BE CONTINUED .....
            try: 
                
                ngram_compressed_no_ascii = "".join([f"\\x{byte:02x}" for byte in ngram_compressed])
                ngram_compressed_no_ascii = ngram_compressed_no_ascii.replace("\\","")
                debugw(ngram_compressed_no_ascii)
                code = ngram_dict[ngram_compressed_no_ascii]
                debugw("****FOUND*****")
                ratio = ngram_byte_length/3 # all ngrams are encoded in a 3 byte address space, hence div by 3
                removebytes = ngram_byte_length
                if(idxchar):
                    insertpos = idx - ngram_byte_length - (3 + idxchar)
                else:
                    insertpos = idx - ngram_byte_length                
                candidates.append((code,insertpos,removebytes,ratio))
            except KeyError:
                #traceback.print_exc()
                debugw("no luck 3N/4N")

            # reset all ngram data
            ngram_length = 0
            ngram_byte_length = 0
            ratio = 0
            removebytes = 0
            ngram_compressed = bytearray()

            #rewind...and retry a new ngram window from initial token index + one token shift
            #BUG HERE !!
            debugw("indexjumps=")
            debugw(index_jumps)
            #time.sleep(0.1)
            debugw("lastindexjumps_except_first=")
            debugw(index_jumps[-len(index_jumps)+1:])
            debugw("index_before_rewind=")
            debugw(idx)

            idx -= sum(index_jumps[-len(index_jumps)+1:])
            index_jumps = []
            debugw("idx after rewind=")
            debugw(idx)

        elif (reset_ngram):
            debugw("ngram reset : unknown token starts before ngram_length 3 or 4")
            ngram_length = 0
            ngram_byte_length = 0
            ratio = 0
            removebytes = 0
            #do not rewind : reset pos after unknown sequence
            index_jumps = []

    return candidates        
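compress_second_pass finds token boundaries in the first-pass stream by reading the MSB continuation flags. A minimal sketch of just that boundary walk (a hypothetical helper; it ignores the 0xFF escape sequence and the session-dictionary case that the real loop handles):

```python
def split_tokens(compressed):
    """Split a first-pass stream into per-token byte chunks using the
    MSB continuation flags (sketch only: no escape/session handling)."""
    chunks = []
    idx = 0
    while idx < len(compressed):
        if not compressed[idx] & 0x80:
            step = 1            # super common word: 1 byte
        elif not compressed[idx + 1] & 0x80:
            step = 2            # common word: 2 bytes
        else:
            step = 3            # rare word: 3 bytes
        chunks.append(bytes(compressed[idx:idx + step]))
        idx += step
    return chunks
```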


def process_candidates_v2(candidates):

    #here we scan all candidates.
    #if there are overlaps, we select the candidate with the best ratio, if any.
    #The result is a reduced list of candidates data.

    #Next we recreate the compressed stream and replace the bytes at insertpos by the candidate code
    debugw(candidates)
    candidates_reduced = []
    idx_reduced = 0
    idx = 0
    deleted_candidates_number = 0

    mutual_overlaps = []
    overlap_idx = 0

    while(idx < len(candidates)):
        
        code = candidates[idx][0]
        insertpos = candidates[idx][1]
        removebytes = candidates[idx][2]
        ratio = candidates[idx][3]

        first_overlap = True
        
        for idx_lookahead in range(idx+1,len(candidates)):
            
            code_lookahead = candidates[idx_lookahead][0]
            insertpos_lookahead = candidates[idx_lookahead][1]
            removebytes_lookahead = candidates[idx_lookahead][2]
            ratio_lookahead = candidates[idx_lookahead][3]

            if((insertpos + removebytes - 1) >= insertpos_lookahead):
                
                debugw("overlap!")
                debugw(code)
                debugw(code_lookahead)
                
                #add mutually overlapping indexes to an array
                if(first_overlap):
                    mutual_overlaps.append([idx])
                    mutual_overlaps[overlap_idx].append(idx_lookahead)
                    first_overlap = False

                else:
                    # case for a mutual overlap of at least 3 ngrams
                    debugw("len mutual overlap:")
                    debugw(len(mutual_overlaps))
                    debugw("overlap_idx")
                    debugw(overlap_idx)
                    mutual_overlaps[overlap_idx].append(idx_lookahead)
                 
                    overlap_idx += 1
                
            else:
                #end of mutual overlap (current lookahead is not overlapping with original idx)
                break
        idx += 1        
    #keep best ratio from all overlap lists
    keep_idxs = []
    remove_idx_shift = 0
        
    for overlap in mutual_overlaps:

        prev_candidate_ratio = 0
        
        for candidate_idx in overlap:

            debugw("candidate_idx:")
            debugw(candidate_idx)
            candidate_ratio = candidates[candidate_idx - remove_idx_shift][3]
            if (candidate_ratio >= prev_candidate_ratio):
                keep_idx = candidate_idx
                prev_candidate_ratio = candidate_ratio

        keep_idxs.append(keep_idx)

        

        for candidate_idx in overlap:
            if(candidate_idx != keep_idx):
                debugw("candidate len:")
                debugw(len(candidates))
                
                debugw("will delete idx:")
                debugw(str(candidate_idx - remove_idx_shift))
                
                del candidates[candidate_idx - remove_idx_shift]
                deleted_candidates_number += 1
                debugw("deleted idx:")
                debugw(str(candidate_idx - remove_idx_shift))
                remove_idx_shift += 1
                #keep the best ratio only from the list of mutual overlaps

    if (deleted_candidates_number > 0):
        debugw("recursive")
        deleted_candidates_number = 0
        candidates = process_candidates_v2(candidates)

    #need to exit recursion when len candidates stops decreasing

    return candidates
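The recursion above prunes overlapping ngram candidates by keeping the best compression ratio. The same idea can be illustrated with a simpler single-pass greedy sketch over (code, insertpos, removebytes, ratio) tuples (an illustration, not the recursive algorithm used here):

```python
def resolve_overlaps(candidates):
    """Toy overlap resolution: among candidates whose byte ranges
    overlap, keep the one with the best compression ratio."""
    kept = []
    for cand in sorted(candidates, key=lambda c: c[1]):
        # overlap test mirrors (insertpos + removebytes - 1) >= insertpos_lookahead
        if kept and kept[-1][1] + kept[-1][2] - 1 >= cand[1]:
            if cand[3] > kept[-1][3]:
                kept[-1] = cand   # better ratio wins the overlap
        else:
            kept.append(cand)
    return kept
```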

def ngram_insert_reserved_bits(ngram_compressed):
            
    debugw("".join([f"\\x{byte:02x}" for byte in ngram_compressed]))

    #convert to bitarray
    c = bitarray(endian='little')
    c.frombytes(ngram_compressed)
    debugw(c)
    
    # set msb of first byte to 1 and shift the bits above up.
    c.insert(7,1)
    debugw(c)

    # set msb of second byte to 1 and shift the bits above up.
    c.insert(15,1)
    debugw(c)

    # remove two excess bits that arose from our shifts
    del c[24:26:1]
    debugw(c)
    
    # replace the original ngram_compressed bytearray with our tweaked bytes
    ngram_compressed = bytearray()
    ngram_compressed.append((c.tobytes())[0])
    ngram_compressed.append((c.tobytes())[1])
    ngram_compressed.append((c.tobytes())[2])

    return ngram_compressed
                

def replace_candidates_in_processed(candidates,processed):

    byteshift = 0
    shiftcode = 0
    for candidate in candidates:
            insertpos = candidate[1] - byteshift
            removebytes = candidate[2]
            del processed[insertpos:insertpos + removebytes]
            byteshift += removebytes
            ## first we need to convert candidate code to proper 3 byte format
            # we add our 4 ngram code space at a 2^20 shift in the 3 bytes address space. 
            shifted_code = 524416 + candidate[0]
            # now we convert our shifted ngram code to a byte sequence in the compressed format
            bytes_shiftedcode = shifted_code.to_bytes(3, byteorder='little')
            # print it
            debugw(bytes_shiftedcode)
            # tweak the bytes to insert reserved bits for 1/2/3 bytes variable length encoding
            # compliance.
            bytes_shiftedcode = ngram_insert_reserved_bits(bytes_shiftedcode)
            # print it
            debugw(bytes_shiftedcode)
            # now we insert it at the position of the non-compressed ngram
            processed[insertpos:insertpos] = bytes_shiftedcode
            # we added 3 bytes, we have to compensate to keep future insertpos valid.
            byteshift -= 3

    return processed
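The byteshift bookkeeping above keeps later insert positions valid after each splice. A toy helper (the name is mine) showing one replacement and the net shift it produces:

```python
def splice(stream, insertpos, removebytes, code_bytes):
    """Replace removebytes at insertpos with an ngram code and return
    the new stream plus the net byte shift (sketch of the bookkeeping
    in replace_candidates_in_processed)."""
    out = bytearray(stream)
    del out[insertpos:insertpos + removebytes]   # drop the raw ngram bytes
    out[insertpos:insertpos] = code_bytes        # insert the 3-byte code
    return out, removebytes - len(code_bytes)    # net shrinkage in bytes
```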


def ngram_process_rules(subtokens):

    ### VARIOUS DETOKENIZER CLEANUP/FORMATTING OPERATIONS
    processed_ngram_string = ""
    capitalize = False
    token_idx = 0
    for token in subtokens:

        if(capitalize):
            token = token.capitalize()
            capitalize = False

        # English syntactic rules : remove whitespace left of "!?." 
        # and enforce capitalization on first non whitespace character following.
        if (re.match(r"[!?.]",token)):
            processed_ngram_string += token
            capitalize = True

        # English syntactic rules : remove whitespace left of ",;:" 
        elif (re.match("[,;:]",token)):         
            processed_ngram_string += token
            capitalize = False

        # append whitespace left of added token
        else:
            processed_ngram_string = processed_ngram_string + " " + token

        token_idx += 1
        
        if(len(subtokens) == token_idx):
            debugw("last token of ngram")
            processed_ngram_string += " "

    return processed_ngram_string
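The spacing/capitalization rules above can be exercised in isolation. A self-contained sketch mirroring ngram_process_rules on a plain token list:

```python
import re

# Sketch of the same English formatting rules: no space before
# punctuation, capitalize after sentence-ending punctuation.
def join_tokens(tokens):
    out = ""
    capitalize = False
    for tok in tokens:
        if capitalize:
            tok = tok.capitalize()
            capitalize = False
        if re.match(r"[!?.]", tok):
            out += tok          # no space before ! ? .
            capitalize = True   # capitalize the next word
        elif re.match(r"[,;:]", tok):
            out += tok          # no space before , ; :
        else:
            out += " " + tok    # space before an ordinary word
    return out + " "            # trailing space after the last token
```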

def decompress_ngram_bytes(compressed):

    idx = 0
    detokenizer_ngram = []
    
    while(idx < len(compressed)):
    
        if(not (compressed[idx] & 128)):
            
            # current index byte msb is at 0, 
            # it is one of the 128 first tokens in the dictionary.
            debugw("super common word")
            #decode in place
            
            inta = compressed[idx]        
            detokenizer_ngram.append(engdict[inta])
            idx += 1

        elif((compressed[idx] & 128) and (not (compressed[idx+1] & 128))):

            # current index byte msb is at 1, and next byte msb is at 0. 
            # it is one of the 16384 next tokens in the dictionary.
            debugw("common word")

            # populate bitarray from the two bytes
            c = bitarray(endian='little')
            c.frombytes(compressed[idx:idx+2])
            debugw(c)

            # remove first byte msb (shift down the bits above)
            del c[7]
            debugw(c)

            # convert bytes array to 16 bit unsigned integer
            inta = (struct.unpack("<H", c.tobytes()))[0]
            # add offset back so we get a valid dictionary key
            inta += 128

            # print word
            detokenizer_ngram.append(engdict[inta])
            # increment byte counter with step 2, we processed 2 bytes.
            idx += 2

        #elif((compressed[idx] & 128) and (compressed[idx+1] & 128)):
        elif((compressed[idx] & 128) and (compressed[idx+1] & 128) and (not compressed[idx+2] & 128)):
            
            # current index byte msb is at 1, and next byte msb is at 1.
            # it is one of the 4194304 next tokens in the dictionary.
            debugw("rare word")
            
            chunk = compressed[idx:idx+3]

            # populate bitarray from the three bytes
            c = bitarray(endian='little')
            #c.frombytes(compressed[idx:idx+3])
            c.frombytes(chunk)
            
            debugw(c)

            # remove second byte msb (shift down the bits above)
            del c[15]
            debugw(c)

            # remove first byte msb (shift down the bits above)
            del c[7]
            debugw(c)

            c.extend("0000000000") 
            # pad to 4 bytes (32 bit integer format) : 3 bytes + 10 bits 
            # because we previously removed two bits with del c[15] and del c[7]
            debugw(c)

            # convert bytes array to 32 bit unsigned integer
            inta = (struct.unpack("<L", c.tobytes()))[0]

            inta += (16384 + 128)

            detokenizer_ngram.append(engdict[inta])

            # increment byte counter with step 3, we processed 3 bytes.
            idx += 3

    return detokenizer_ngram


###INLINE START###

#downloading tokenizer model if missing
nltk.download('punkt')

#opening the english dict of the 1/3 million most used words from the google corpus of 1 trillion words.
#special characters have been added with their respective prevalence (from the wikipedia corpus)
#contractions have also been added: the form with an apostrophe appears on the line just after
#the form without it. ex : the line after "dont" contains "don't"

file1 = open('count_1w.txt', 'r')
Lines = file1.readlines()

#initializing Python dicts
count = 1
engdict = {}
engdictrev = {}


# special case : byte val 0 is equal to new line.
# TODO : make sure that windows CRLF is taken care of.
engdict[0] = "\n"
engdictrev["\n"] = 0

# populating dicts
for line in Lines:
    # Strips the newline character
    engdict[count] = line.strip()
    engdictrev[line.strip()] = count
    count += 1

### populating ngram dict

filengrams = open('outngrams.bin', 'rt')
ngramlines = filengrams.readlines()

ngram_dict = {}
ngram_dict_rev = {}


count = 0
# populating dicts
for ngramline in ngramlines:
    # Strips the newline character
    #keystr = "".join([f"\\x{byte:02x}" for byte in ngramline.strip()])
    #keystr = keystr.replace("\\","")
    #if(count == 71374):
    keystr = ngramline.strip()
    #print(ngramline.strip())
    #print(keystr)
    #quit()
    ngram_dict_rev[count] = keystr
    ngram_dict[keystr] = count
    count += 1

idx = 0
debugw("first ngram in dict:")
test = ngram_dict_rev[0]
debugw(test)
debugw(ngram_dict[test])
count = 0


if (compress):

    tokens = []
    # check if file is utf-8
    if(check_file_is_utf8(infile)):
        with codecs.open(infile, 'r', encoding='utf-8') as utf8_file:
            # Read the content of the UTF-8 file and transcode it to ASCII
            # encode('ascii','ignore') MAY replace an unknown char with chr(0).
            # We don't want that, as chr(0) is the termination char for unknown strings.
            # On the other hand, backslashreplace replaces too many chars that could otherwise be transcribed.
            # The best option for now is to check for chr(0) presence before writing the unknown token representation.
            ascii_content = utf8_file.read().encode('ascii', 'ignore').decode('ascii')
            #debugw(ascii_content)
            Linesin = ascii_content.splitlines()
            if(debug_on):
                outfile_ascii = infile + ".asc"
                with codecs.open(outfile_ascii, "w", encoding='ascii') as ascii_file:
                    ascii_file.write(ascii_content)
    else:
        # Reading file to be compressed
        file2 = open(infile,'r')
        #text = file2.read()
        Linesin = file2.readlines()

    if(gendic):
         if(len(outfile)):
                fh = open(outfile, 'wt')

    lineidx = 0
    for line in Linesin:
        line = line.lower()

        # First pass tokenizer (does not split adjunct special chars)
        line_tokens = tknzr.tokenize(line)
        #debugw(line_tokens)

        if( not gendic):
            tokens.append(line_tokens)
        else:
            compressed = compress_tokens(line_tokens,gendic)
            if(len(outfile) and len(compressed)):
                # write compressed binary stream to file if supplied in args or to stdout otherwise.
                hexstr = "".join([f"\\x{byte:02x}" for byte in compressed])
                hexstr = hexstr.replace("\\","")
                fh.write(hexstr)
                if(debug_ngrams_dic):
                    fh.write("\t")
                    strline = str(lineidx)
                    fh.write(strline)
                fh.write("\n")
            else:
                sys.stdout.buffer.write(compressed)
                sys.stdout.buffer.write(b"\n")
        lineidx += 1
    #line_tokens.append("\n")
    #tokens = tokens + line_tokens
    debugw(tokens)
    
    if (not gendic):

        compressed = compress_tokens(tokens,gendic)

        if(secondpass):
            candidates = compress_second_pass(compressed)
            debugw("candidates:")
            debugw(candidates)
            processed_candidates = process_candidates_v2(candidates)
            debugw("processed candidates:")
            debugw(processed_candidates)
            compressed = replace_candidates_in_processed(processed_candidates,compressed)


        # write compressed binary stream to file if supplied in args or to stdout otherwise.
        if(len(outfile)):
            with open(outfile, 'wb') as fh:
                fh.write(compressed)
        else:
            sys.stdout.buffer.write(compressed)

        for sessidx in range(2113664,unknown_token_idx):
            debugw("session_index:" + str(sessidx))
            debugw(engdict[sessidx])
            debugw(engdictrev[engdict[sessidx]])
            debugw("session_index:" + str(sessidx))

    if (gendic and len(outfile)):
        fh.close()

# decompress mode
else:

    # decoding part
    debugw("decoding...")
    detokenizer = []
    detokenizer_idx = 0

    if(len(infile)):
        with open(infile, 'rb') as fh:
            compressed = bytearray(fh.read())

    idx = 0
    #FirstCharOfLine = 1
    CharIsUpperCase = 1
    #CharIsUpperCase2 = 0
    
    # main decoding loop
    while (idx < len(compressed)):
            
            # write each byte
            debugw(hex(compressed[idx]))

            #if( (idx > 0) and compressed[idx] == 0 and compressed[idx - 1] == 0):
            #find len of consecutive 0 chars

            if(idx < len(compressed) -1):
                if((compressed[idx] == 0) and (compressed[idx+1] != 0)):
                    #FirstCharOfLine = 1
                    CharIsUpperCase = 1
                elif(CharIsUpperCase == 1):
                    #FirstCharOfLine = 2
                    CharIsUpperCase = 2
                        
            if(len(detokenizer) > 0):


                ### VARIOUS DETOKENIZER CLEANUP/FORMATTING OPERATIONS

                #ensure this is not the end of an ngram. ngrams necessarily contain whitespaces
                if (not re.search(" ",detokenizer[detokenizer_idx-2])):
                    # English syntactic rules : remove whitespace left of "!?." 
                    # and enforce capitalization on first non whitespace character following.
                    if (re.match(r"[!?.]",detokenizer[detokenizer_idx-2]) and detokenizer_idx > 2):
                        del detokenizer[detokenizer_idx-3]
                        detokenizer_idx -= 1
                        if(CharIsUpperCase != 1):
                            CharIsUpperCase = 2

                    # English syntactic rules : remove whitespace left of ",;:" 
                    if (re.match("[,;:]",detokenizer[detokenizer_idx-2]) and detokenizer_idx > 2):         
                        del detokenizer[detokenizer_idx-3]
                        detokenizer_idx -= 1

                    # URL/URI detected, remove any spurious whitespace before "//" 
                    if (re.match(r"^//",detokenizer[detokenizer_idx-2]) and detokenizer_idx > 2):
                        del detokenizer[detokenizer_idx-3]
                        detokenizer_idx -= 1
                    
                    # E-mail detected, remove whitespaces left and right of "@"
                    if (re.match("@",detokenizer[detokenizer_idx-2]) and detokenizer_idx > 2):         
                        del detokenizer[detokenizer_idx-3]
                        detokenizer_idx -= 1
                        del detokenizer[detokenizer_idx-1]
                        detokenizer_idx -= 1

            if(not (compressed[idx] & 128)):
                
                # current index byte msb is at 0, 
                # it is one of the 128 first tokens in the dictionary.
                debugw("super common word")
                #decode in place
                
                inta = compressed[idx]
                       
                if(CharIsUpperCase == 2):
                    detokenizer.append(engdict[inta].capitalize())
                    detokenizer_idx += 1
                    CharIsUpperCase = 0
                else:    
                    detokenizer.append(engdict[inta])
                    detokenizer_idx += 1
                  
                # print to stdout
                if(CharIsUpperCase != 1):
                    detokenizer.append(" ")
                    detokenizer_idx += 1

                debugw(engdict[inta])
                idx += 1

            elif((compressed[idx] & 128) and (not (compressed[idx+1] & 128))):
    
                # current index byte msb is at 1, and next byte msb is at 0. 
                # it is one of the 16384 next tokens in the dictionary.
                debugw("common word")
    
                # populate bitarray from the two bytes
                c = bitarray(endian='little')
                c.frombytes(compressed[idx:idx+2])
                debugw(c)
    
                # remove first byte msb (shift down the bits above)
                del c[7]
                debugw(c)

                # convert bytes array to 16 bit unsigned integer
                inta = (struct.unpack("<H", c.tobytes()))[0]
                # add offset back so we get a valid dictionary key
                inta += 128
    
                # print word
                if(CharIsUpperCase == 2):
                    detokenizer.append(engdict[inta].capitalize())
                    detokenizer_idx += 1
                    CharIsUpperCase = 0
                else:
                    detokenizer.append(engdict[inta])
                    detokenizer_idx += 1   

                if(CharIsUpperCase != 1):
                    detokenizer.append(" ")
                    detokenizer_idx += 1 
                
                debugw(engdict[inta])
                # increment byte counter with step 2, we processed 2 bytes.
                idx += 2
    
            #elif((compressed[idx] & 128) and (compressed[idx+1] & 128)):
            elif((compressed[idx] & 128) and (compressed[idx+1] & 128) and (not compressed[idx+2] & 128)):
                
                # current index byte msb is at 1, and next byte msb is at 1.
                # it is one of the 4194304 next tokens in the dictionary.
                debugw("rare word")
                
                chunk = compressed[idx:idx+3]

                # populate bitarray from the three bytes
                c = bitarray(endian='little')
                #c.frombytes(compressed[idx:idx+3])
                c.frombytes(chunk)
                
                debugw(c)

                # remove second byte msb (shift down the bits above)
                del c[15]
                debugw(c)

                # remove first byte msb (shift down the bits above)
                del c[7]
                debugw(c)

                c.extend("0000000000") 
                # pad to 4 bytes (32 bit integer format) : 3 bytes + 10 bits 
                # because we previously removed two bits with del c[15] and del c[7]
                debugw(c)

                # convert bytes array to 32 bit unsigned integer
                inta = (struct.unpack("<L", c.tobytes()))[0]

                if (inta >= 524416):
                    # this is a ngram.
                    # remove offset to get into ngram dic code range.
                    inta -= 524416
                    debugw("this is an ngram. code:")
                    debugw(inta)
                    # process ngram through ngram dictionary
                    # replace ngram code with corresponding ngram string and add them to the tokenizer
                    ngram_string = ngram_dict_rev[inta]
                    debugw("ngram string:")
                    debugw(ngram_string)
                    subs = 0
                    #(ngram_string,subs) = re.subn(r'x',r'\\x',ngram_string)
                    (ngram_string,subs) = re.subn(r'x',r'',ngram_string)   
                    debugw("ngram string:")
                    debugw(ngram_string)
                    ngram_bytes = bytes.fromhex(ngram_string)
                    subtokens = decompress_ngram_bytes(ngram_bytes)
                    #bytes = bytearray(ngram_string,encoding="ascii")
                    #subtokens.insert(0,"PREFIX")
                    #subtokens.append("SUFFIX")
                    
                    
                    #subtokens = nltk.word_tokenize(ngram_string)
                    # We know there shouldn't be any new lines in the subtokens.
                    # possessives, contractions or punctuation may occur.
                    # we need to add capitalization rules and spaces after punctuation rules.
                    # These should be caught by the detokenizer backward processor (detokenizer_idx -2)
                    # The problem is we append more than one token.
                    # So we should process rules for first subtoken insertion only.
                    # The rest should have inline processing (here)

                    if(CharIsUpperCase == 2):
                        detokenizer.append(subtokens[0].capitalize())
                        detokenizer_idx += 1
                        CharIsUpperCase = 0
                    else:
                        detokenizer.append(subtokens[0])
                        detokenizer_idx += 1 
                    #if(CharIsUpperCase != 1):
                    #    detokenizer.append(" ") 
                    #    detokenizer_idx += 1

                    ngram_processed_string = ngram_process_rules(subtokens[1:])
                    # We should take care that the backward detokenizer processor does not mingle
                    # with the rest of the ngram string.
                    # Such a special token will be the only one to have whitespaces in it
                    # So we can detect it this way
                    detokenizer.append(ngram_processed_string)
                    detokenizer_idx += 1
                                        

                else:
                    inta += (16384 + 128)

                    if(CharIsUpperCase == 2):
                        detokenizer.append(engdict[inta].capitalize())
                        detokenizer_idx += 1
                        CharIsUpperCase = 0
                    else:
                        detokenizer.append(engdict[inta])
                        detokenizer_idx += 1 
                    if(CharIsUpperCase != 1):
                        detokenizer.append(" ") 
                        detokenizer_idx += 1
                    
                    debugw(engdict[inta])
                    # increment byte counter with step 3, we processed 3 bytes.
                idx += 3

            #elif((compressed[idx] == 255) and (compressed[idx+1] == 255) and (compressed[idx+2] == 255)):   
            elif((compressed[idx] & 128) and (compressed[idx+1] & 128) and (compressed[idx+2] & 128)):
            
                #check if Huffmann first

                chunk = compressed[idx:idx+3]

                # populate bitarray from the three bytes
                c = bitarray(endian='little')
                #c.frombytes(compressed[idx:idx+3])
                c.frombytes(chunk)
                
                debugw(c)

                # remove third byte msb (shift down the bits above)
                del c[23]
                debugw(c)

                # remove second byte msb (shift down the bits above)
                del c[15]
                debugw(c)

                # remove first byte msb (shift down the bits above)
                del c[7]
                debugw(c)

                c.extend("00000000000") 
                # pad to 4 bytes (32 bit integer format) : 3 bytes + 8 bits + 3 bits 
                # because we previously removed three bits with del c[23], del c[15] and del c[7]
                debugw(c)

                # convert bytes array to 32 bit unsigned integer
                inta = (struct.unpack("<L", c.tobytes()))[0]
                inta -= 2097151
                # if it is a Huffmann select tree code it will be 0 to 4 included
                # if it is a session DIC it will be shifted in the negatives.

                if (inta in range(0,5)):        

                    # unknown word
                    # end check if Huffmann first
                    debugw("unknown word escape sequence detected, code: " + str(inta))
                    #unknown word escape sequence detected.
                    if(inta == 0):
                        char = compressed[idx+3]
                        stra = ""
                        idxchar = 0
                        while(char != 0):
                            debugw("char=")
                            debugw(char)
                            stra += chr(char)
                            debugw("printing string state=")
                            debugw(stra)
                            idxchar += 1
                            char = compressed[idx+3 + idxchar]
                        debugw("termination char detected=")
                        debugw(char)
                    else:
                        bstr = bytearray()
                        idxchar = 0
                        # bugfix: initialize char from the first payload byte before the loop,
                        # mirroring the inta == 0 branch (char was previously unset here).
                        char = compressed[idx+3]
                        while(char != 0):
                            bstr.append(char)
                            idxchar += 1
                            char = compressed[idx+3 + idxchar]
                        debugw("huffmann : termination char detected=")
                        debugw(char)
                        stra = decode_unknown(bstr,inta)
                        #stra = codec.decode(bstr)    
                    
                    debugw("we append that unknown word in our session dic at idx: " + str(unknown_token_idx) + " since it may be recalled")
                    engdictrev[stra] = unknown_token_idx
                    engdict[unknown_token_idx] = stra
                    unknown_token_idx += 1
                    
                        
                    if(CharIsUpperCase == 2):
                        detokenizer.append(stra.capitalize())
                        detokenizer_idx += 1
                        CharIsUpperCase = 0
                    else:
                        detokenizer.append(stra)
                        detokenizer_idx += 1 
                    if(CharIsUpperCase != 1):
                        detokenizer.append(" ") 
                        detokenizer_idx += 1
    
                else:

                    # no inline chars follow a session DIC recall: reset idxchar so the
                    # final "idx += 3 + idxchar" advances by exactly 3 bytes.
                    idxchar = 0

                    inta += 2097151
                    # it is a session DIC, shifting back to 0.
                    inta += (2097152 + 16384 + 128)
                    # it is a session DIC, shifting back session dic address space.

                    debugw("recalled word:")
                    
                    try:
                        debugw(engdict[inta])
                        # print word
                    
                        if(CharIsUpperCase == 2):
                            detokenizer.append(engdict[inta].capitalize())
                            detokenizer_idx += 1
                            CharIsUpperCase = 0
                        else:
                            detokenizer.append(engdict[inta])
                            detokenizer_idx += 1   

                        if(CharIsUpperCase != 1):
                            detokenizer.append(" ")
                            detokenizer_idx += 1 
                    
                    except:
                        debugw("something went wrong, could not find word in session DIC")

                        for sessidx in range(2113664,unknown_token_idx):
                            debugw("session_index:" + str(sessidx))
                            debugw(engdict[sessidx])
                            debugw(engdictrev[engdict[sessidx]])
                            debugw("session_index:" + str(sessidx))


                idx += 3 + idxchar

    debugw(detokenizer)
    if not(len(outfile)):
        print(''.join(detokenizer))
    else:
        # write clear text to file if supplied in args
        with open(outfile, 'w') as fh:
            fh.write(''.join(detokenizer))
    
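The bitarray gymnastics above (deleting each byte's MSB flag, then re-packing) are equivalent to stripping the MSB of every byte and concatenating the remaining 7-bit groups, least significant byte first. Here is a minimal standalone sketch of the three-MSB case, using plain integer arithmetic instead of the bitarray module:

```python
# Standalone sketch: decode a 3-byte code in which all three MSB flags are set,
# using plain integer arithmetic instead of the bitarray module.

def decode_three_msb(b0: int, b1: int, b2: int) -> int:
    """Strip the MSB flag of each byte and concatenate the remaining
    7-bit groups, least significant byte first."""
    assert b0 & 0x80 and b1 & 0x80 and b2 & 0x80, "all three MSB flags must be set"
    return (b0 & 0x7F) | ((b1 & 0x7F) << 7) | ((b2 & 0x7F) << 14)

# Example: bytes 0x85, 0x83, 0x81 carry payloads 5, 3 and 1.
code = decode_three_msb(0x85, 0x83, 0x81)
print(code)            # 5 + (3 << 7) + (1 << 14) = 16773
print(code - 2097151)  # shifted into the Huffmann select tree / session dic range
```

This mirrors the `del c[23]; del c[15]; del c[7]` sequence followed by the little-endian `struct.unpack`.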

Audio shelving filter with resonance.

This shelving filter was obtained serendipitously by combining an op-amp integrator stage (bottom) with an all-pass filter stage (top). The top stage output is the filter output.

I have not done the analytical determination of the transfer function down to the resistance and capacitance of each component. If you wish to do so, this could be done either through FACT (fast analytical circuit techniques), or automatically with an LTspice companion program like SLiCAP.

Thus the behavior was mostly determined empirically by sweeping all resistances and capacitances, with resistances replaced by potentiometers.

It exhibits resonance. Most shelving filters do not exhibit this feature. It may be desired in some cases.

It can also switch between a high shelf and a low shelf behavior.

Control is a bit touchy, as the filter shows a certain degree of inter-dependence between potentiometer effects (certain potentiometers affect several characteristics at once).
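To get a numerical feel for a resonant shelf, here is a sketch of the standard RBJ "Audio EQ Cookbook" digital low shelf. To be clear, this is not this circuit's transfer function (which I have not derived); it is a generic digital shelf whose slope parameter S loosely plays the role of the resonance control, with S > 1 producing a peak/dip near the corner:

```python
# Illustrative sketch only: NOT the transfer function of the circuit above.
# Evaluates the standard RBJ "Audio EQ Cookbook" digital low shelf on the
# unit circle, using only the standard library.
import cmath, math

def low_shelf_mag_db(f, fs=48000.0, f0=1000.0, gain_db=6.0, S=2.0):
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / 2 * math.sqrt((A + 1 / A) * (1 / S - 1) + 2)
    cw = math.cos(w0)
    # RBJ cookbook low-shelf coefficients
    b0 = A * ((A + 1) - (A - 1) * cw + 2 * math.sqrt(A) * alpha)
    b1 = 2 * A * ((A - 1) - (A + 1) * cw)
    b2 = A * ((A + 1) - (A - 1) * cw - 2 * math.sqrt(A) * alpha)
    a0 = (A + 1) + (A - 1) * cw + 2 * math.sqrt(A) * alpha
    a1 = -2 * ((A - 1) + (A + 1) * cw)
    a2 = (A + 1) + (A - 1) * cw - 2 * math.sqrt(A) * alpha
    z = cmath.exp(1j * 2 * math.pi * f / fs)   # evaluate H(z) at this frequency
    h = (b0 + b1 / z + b2 / z ** 2) / (a0 + a1 / z + a2 / z ** 2)
    return 20 * math.log10(abs(h))

for f in (20, 200, 1000, 5000, 20000):
    print(f, round(low_shelf_mag_db(f), 2))
```

At DC the gain converges exactly to gain_db, and at Nyquist to 0 dB, with the resonance bump in between when S > 1.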

Schematic
wiper_fb sweep – filter output is top graph
wiper_LP_gain sweep – filter output is top graph

It requires the TL072 model for the op-amps.

There is a wave input if you wish to test the filter on a wave file. The input filename is input.wav and the output filename is output.wav. Connect the “in” net instead of the V5 source and change the simulation command to .tran.

Don’t forget to set the transient analysis to the length of the input wave file.

The zip below contains the ASC file and some screenshots of the frequency response while stepping some potentiometers.

Have fun!

DOWNLOAD :

A broad assessment of the e-waste management problem and pragmatic strategies to increase recovery valuation.

STATUS: DRAFT v0.8, 20 Sept 2023
  • With emphasis on integrated circuits & SMD components & large sub-components classification & recovery strategies and process improvements.
  • With emphasis on the challenges of LIB (Lithium battery) direct recycling methods, subcomponent recovery, economic viability considerations, and some proposals for automation, vs. indiscriminate indirect methods of crude matter recovery.
  • Comparison of various business and logistics models of e-waste recycling operations.

The situation

The most daunting aspect of the e-waste problem is export to countries of cheap labor (mainly in Africa), where recycling operations are done in landfills. E-waste leachate contains heavy metals, flame retardants, plastics, PCB residues and glass that find their way into the water table, down river streams, or into the environment as particulates. It also contributes to the microplastic and plastic-fragmentation issue, and takes its toll on the lives and health of people working in atrocious conditions.

This practice is known as e-waste dumping, and it accounts for a large part of the logistics flows of e-waste around the world.

The main issue with e-waste recycling as it is done today is that doing it properly, so as to minimize environmental impact, is an energy-intensive, investment-intensive, spatially extensive and logistically complex process. That means high cost, and the sheer device turnover would require a large investment in e-waste facilities, even with re-use and refurbishing taking off some of the load. Add to this mix planned obsolescence, lower quality standards, limited right to repair and the non-modular design of modern devices, and you have a perfect mix for a disaster.

This means that addressing the problem would require inventing an economic model of quality over quantity, with no planned obsolescence and a guaranteed right to repair. This would mean higher purchase costs for new devices, or a subscription model, or a reduction in the range and differentiation of products offered by corporate entities, and a lower level of functional sophistication, or at least a reduction in the pace of innovation in favor of investment in solidifying current designs.

However, this would not eliminate e-waste. What could be proposed, however, is to make e-waste processing a viable activity, mainly through fine-granularity recovery down to the IC level and the large sub-component level (such as inductors, large MLCCs, on-board transformers, etc…), which could then be remarketed as second-hand ICs. For now, second-hand ICs are offered at a modest level by Chinese resellers. We will investigate techniques to make the operation large scale.

Major e-waste flows : producers and countries of destination

Formal definition of e-waste and common management strategies

Definition

The first challenge is to define e-waste. Most devices, even electromechanical ones, have PCB control boards. By that definition, we could categorize a washing machine as e-waste. We advocate for a more precise definition: that of a compound system containing a part that is e-waste, “e” standing for electronics, while the rest is electromechanical, hydraulic (any system containing a fluid) or pneumatic (any system containing a compressed gas) in nature. An HVAC unit would be an example of that kind of system. The casing/enclosure is also taken into account: a metal enclosure, whether ferromagnetic or not, has potential value, while plastic has lower value as well as a potential environmental impact, which is problematic. Plastic fragmentation into microplastics is one major issue of such waste.

A sensible definition of e-waste would then be any ‘part’ of a larger system that has been separated from its chassis, whether metal or plastic, and is mostly electronic in nature, with the addition of screens and batteries.

With that definition in mind, we would have “core e-waste”, with “e” standing for electronics, and “broader-definition e-waste”, with “e” mostly standing for “electrical nature”.

Common e-waste management strategies.

We will focus on the broad strategies that make up e-waste recycling, taking into account three factors: scale, overall complexity, and granularity (depth).

Small vs Large scale operations.

Small scale operations are typically those that treat quantities less than <check the scale of operations for small scrappers> on a surface of less than < >. Given the low quantity and longer time per device, these operations may provide a good amount of recovery granularity. However, they are mainly low-tech and cannot decently manage IC recovery or complex logistics requiring computer oversight, AI, etc…

These operations may recover Ag/Au from PCB scrap, gold-plated connectors, etc… so as to provide the major source of income for the operation. They frequently use hydrometallurgic processes (acid baths) that have severe consequences on the environment if byproducts (leachates) are not treated and disposed of properly.

Current Large scale / bulk operations.

Current large scale operations are tailored to process a large quantity of e-waste indiscriminately (without much manual pre-processing, except for re-use and refurbish assessment steps in some operations). They use large shredding equipment, magnetic separators, sieves and filters, shakers, ball mills, and hydrometallurgic and pyrometallurgic processes to recover heavy and precious metals as the main source of profit for the operation, as well as scrap byproduct.

Current large scale / bulk operations are usually performed in places of high device consumption, low reusability incentive or culture, and high cost (or lack) of human labour compared to automation. These processes involve early shredding and no case opening. Ferromagnetic separation is possible; separation of non-ferromagnetic metals from plastics and other components can be aided through shaking, but will frequently require human intervention near a conveyor belt. Ex: Middle East.

Networked vs Unified operations

Certain operations, like pre-staging into device categories (bins for laptops, bins for smartphones) and health assessment of the devices, are best performed at the locations with the highest density of device use and consumption, which are large metropolitan centers, in the case of small e-waste (excluding appliances). These are better performed in networked, franchise-style grid operations sharing the same guidelines and practices across collection points. This accounts for the limited amount of real estate per collection point, as well as the risks of stockpiling devices with non-passivated LIBs.

Unified operations, where a large quantity of e-waste is sent to a large center serving a large geographical area, are better suited for larger and more varied e-waste, such as appliances, as well as to serve business needs (BtB).

Possible hybrid operations.

A hybrid flow would put a large emphasis on the circular economy ethos, promoting re-use, refurbish and repair operations, as well as granular (deep) component extraction in device dismantling processes, while keeping the operation large scale, with the same machinery as current large scale processes (shredders, screens, sieves, magnetic separators, hydrometallurgic baths).

Granular operations promote dismantling into subparts without inducing damage, through process expertise such as proper tool use, to keep the recovered part's value intact. The recovery granularity ceiling is mainly dictated by the recovery time/value ratio per part.

Also, a major advantage of a properly staffed dismantling operation is the identification (and removal) of any component that might pose an explosive or incendiary threat to machines downstream.

Certain lower-level operations such as IC/SMD recovery could be rendered profitable through the use of automation combined with deep neural networks, and some regulatory advances in the industry to help in that effort.

The main issue with a hybrid operation is the bottleneck that granular recovery and re-use/repair/refurbish assessment induce for downstream processes. Dynamically computed recovery thresholds (based on IC value assessment) should be used, dependent on the e-waste feed flow, so as not to starve the downstream processes or create inflow stockpiling.
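One way to picture such a dynamic threshold (the values, times and the greedy policy below are entirely hypothetical, for illustration only): rank candidate parts by estimated value per minute of recovery labour, then admit parts into granular recovery only while the labour budget implied by the current feed flow is not exhausted.

```python
# Hypothetical sketch of a dynamically computed recovery threshold:
# admit parts into granular (IC-level) recovery by value density until the
# dismantling-labour budget dictated by the current inflow is spent.

def recovery_value_threshold(parts, labour_minutes_available):
    """parts: list of (estimated_value, recovery_minutes) tuples.
    Returns the minimum value density (value/minute) still admitted,
    or None if no part fits the budget."""
    ranked = sorted(parts, key=lambda p: p[0] / p[1], reverse=True)
    spent, threshold = 0.0, None
    for value, minutes in ranked:
        if spent + minutes > labour_minutes_available:
            break                      # budget exhausted: do not starve downstream
        spent += minutes
        threshold = value / minutes    # last admitted value density
    return threshold

parts = [(12.0, 3), (1.5, 5), (8.0, 2), (0.4, 4)]   # (EUR, minutes), made up
print(recovery_value_threshold(parts, labour_minutes_available=6))
```

As the feed flow rises, labour_minutes_available per batch shrinks and the threshold rises, shunting lower-value parts to crude recovery.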

Finally, the downstream recovery of heavy and precious metals (whether through hydrometallurgy or pyrometallurgy) could be made easier in granular processes, where a substantial share of high-level components has already been stripped from the printed circuit board and categorized, thus allowing tailored processes for these components, such as “SMD sand” vs “printed circuit board” processes.

Collection Strategy : The start of the journey.

Let’s first list all existing waste collection strategies, regardless of type, to see how e-waste collection usually integrates or could integrate into the existing infrastructure.

Collection at home in a garbage bin. These are used for common waste. Four different bins are mostly used:

One for plastics and metals, one for paper, magazines and cardboard, and one for commercial glass (glass containers, not kitchen glass). The last one is for undifferentiated waste (plastic foils, small plastic bits, dust, organic matter). More recently, organic matter is being given a specific collection place (composting bin).

This method is in place for high-density residential areas, usually with one set of bins per house or residential building. Larger buildings use larger bins and specific rooms to store garbage. These are taken to the street by janitors/contractors before collection time; garbage trucks collect the garbage, and janitors or contractors place the bins back on the sidewalk once empty, cleaning them when required. Distance to bin: no more than the building height or terrain size.

Collection per residential block or area. Same as above, but the 4 bins are on the street. The bin density (bins/inhabitants or bins/area) is usually kept constant. These are the preferred methods for lower-density urban settings. Distance to bin ranges from 10 m to 150 m max.

Sometimes there is a per-housing-unit infrastructure for general waste (3 or 4 categories) and collection per block or area for specific types of waste, e.g. clothing, organic material, glass bottles and containers.

Garbage chutes. Once popular, these make waste classification harder, since all waste going through the chute ends up in the same bin. They also promote pest and disease proliferation.

Pneumatic network. An experimental technology, deployed in a residential area of New York, plagued with problems such as high maintenance costs and downtime due to pipe blockage.

Specific day-of-collection operations. Usually reserved for furniture or appliances: a contractor or municipal services will take any waste left on the street from authorised waste categories. Some jurisdictions require the waste to be tagged, so as to prevent third parties from dumping waste into another sector for convenience, as each sector is managed on a rolling calendar.

Collection on demand. Furniture or appliances, or other types of waste are put on the street or taken from the door by contractors or municipal workers at a specific time on request of the user.

Mobile collection. A mobile collection vehicle (usually a small to medium truck) stays at some point in the city for a certain amount of time to collect from passing users. Its presence is announced beforehand through a public-address sound system, a website, or other means.

Collection by mail. Usually for small items like smartphones, sent through the mail to the recycling operator. Mail costs may be significant unless the activity is subsidized. Moreover, the user must have a monetary gain or incentive to send the item through the mail (which requires a trip to the post office in some countries), and needs a printed document acting as a reference for the transaction added to the shipment, all of this requiring a larger, padded envelope than for regular mail.

Collection by Scavenging or Area cleanup.

Waste finds its way into the general environment due to improper disposal. Community cleanup operations are performed from time to time over large swaths of terrain to remove accumulated waste. In some environments waste tends to get buried under soil, which makes cleanup and detection harder and implies a minimum number of cleanup operations per year.

Scavenging operations also make up a large part of the ecosystem of scrapping businesses, where scavengers scan areas for scrap, bring it to private recycling centers and collect some cash.

Collection at Point of Sale.

Same as collection per area, but instead of tying the collection bin(s) to a municipal or public area, they are situated inside the premises of shops, up to large stores (e.g. inside malls). These methods ensure better protection of the collected waste against degradation, unauthorized access or theft.

Collection at equipment renewal.

Whether at home for large appliances or at point of sale, old equipment that is being replaced by a new one is taken care of.

Collection challenges specific to e-waste

Bin collection of e-waste without sorting is hazardous, mainly because of the presence of batteries. Lithium cells can catch fire, explode and trigger a chain reaction in all the collected waste, produce noxious fumes, and burn the collection site down. The main culprits for fire initiation are: lithium batteries exposed to humid air, electrical shorts at the terminals, failure of the integrated BMS (battery management system), exposure to high temperatures, and poor battery cell quality. What makes the issue worse is how difficult it is for the device user to remove certain device batteries at this step.

That is why extraction of battery cells should be one of the first steps in any collection process.

However, extracting and concentrating battery cells still displaces the fire hazard to another place, albeit one where preventive and reactive measures can be better implemented. One measure would be to store battery modules in a buffer of oil or another inert liquid. This would limit the contact of Li electrodes with air and moisture and inhibit lithium oxidation (lithium fires).

However, this would not protect a cell against internal shorts between cathode and anode, which could lead to localized spots of high temperature, with fast temperature rise within the medium, that can propagate through thermal runaway and release gases.

Lithium fires are very exothermic; temperatures may reach upwards of 1000°C.

Even a steel vessel, with a melting point around 1400°C, could suffer structural damage if exposed to an uncontrolled chain reaction of its contents.

This means that the storage density of Lithium battery elements has to be kept low (Lithium to inert fluid ratio) by mass to provide a sufficiently large thermal damping effect.

There is hope, however, that continuous improvements in more stable lithium-based battery technologies such as LiFePO4 will reduce the risk of such occurrences. Innovation in the battery field is a double-edged sword: it can make derived technologies more mature, but it also brings to market novel chemistries (in the search for ever-increasing energy density by weight or volume) that are not so well explored in terms of stability.

Regulatory requirements and quality control are also tools, but enforcement is difficult when the industry is overseas. Local or regional production is therefore paramount for regulatory requirements to be enforceable and to make EV fires a thing of the past.

Given the pervasiveness of batteries, the fire and explosion risks, continually rising demand, and a lithium recycling field that is not yet mature (where profitability has not been reached yet, but where the risks dictate an urgent need for investment), we will discuss the state of the art of battery recycling in Chapter <>

Classification by size and type, inter-center flows.

Whether a unit should be sent to a standard recycling center (as for an appliance) or to an e-waste center would primarily depend on the ratio of e-waste weight to total weight. Once stripped from the appliance at the standard recycling center, the e-waste part would be sent on to the e-waste specific center. Large appliances are usually collected either by municipal services or at the sale of a new device in large metropolitan areas. Disposal in the street at specific dates does not guarantee collection by the designated company or municipality, and some e-waste may be collected by enthusiasts, individual recyclers, or the homeless. This is a specific re-use paradigm with its pros and cons, since it offloads the recycling center and provides raw material for people who collect for re-use. For e-waste, the risk is collection by entities that do not perform re-use or repair, but operate small e-waste recycling operations using noxious hydrometallurgic processes, mainly for precious metal collection and without due regard for safe environmental practices.

Optimization at collection and off-loading

For dense urban areas, small collection centers used as buffers for small and ubiquitous e-waste are required for logistics optimization and pre-categorization (refurbish, repair, dismantle).

There is also the concept of the ‘resource center’, where partly dismantled objects or appliances are re-used by DIY enthusiasts. However, these are small-scale operations and are not oriented toward ubiquitous e-waste. They do not contribute much to reducing the causes of the e-waste problem, mainly because components in ubiquitous devices such as smartphones, unless modular, cannot be reused in any sensible way by a layperson. Even DIY electronics hobbyists have had to adapt, purchasing specific equipment like microscopes to perform their repair or prototyping work in the current high-density, highly integrated, small-footprint SMD world.

The same applies to the e-waste generated by the automotive industry: the control boards are e-waste, and automotive processes rely more and more on digital components, such as for automated or assisted driving. This trend is not without issues, as it may “brick” a vehicle that would otherwise be sound on the physical and mechanical level while it waits for repair or a specific part, sometimes for quite a long time. Given the tense market for automotive ICs, it follows that recovery of ICs sourced from automotive e-waste should be attempted.

However, automotive ICs are considered critical devices, which puts the effort of recovering them (sometimes in an undetected defective or fatigued state) in direct contradiction with the safety objectives of the industry, which requires new, quality components. It would still be possible, though, for them to find a new life in the non-critical consumer market.

Case study : User journey to the recycling center.

We’ll discuss the flow of e-waste into a general purpose recycling collection center with proper e-waste management capabilities.

In this particular case, the user brings waste such as appliances to the recycling center, as well as some higher density e-waste.

Proper information about accepted waste is important and should be found on the institutional or business website of the collection center.

Depending on the country's and region's wealth, certain collection and sorting centers incentivize collection by rewarding the scrapper. The price per kg of each category of waste should be clearly visible on site and on the website. These are usually private businesses.

Also, in that model, items requiring disassembly (appliances) are usually bought by scrap centers at less than the sum of their individual parts. This incentivizes scrappers to perform preliminary work and bring already disassembled appliances, so they earn a little more.

These disassembled parts are usually bought at the price of the metal type or broad classification (copper, zinc, lead, steel, electrical motors and windings, PCB).
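As an illustration of that pricing model, a payout at such a center is simply a weight-times-rate sum over the sorted fractions; the category names and per-kg prices below are invented for the example:

```python
# Illustrative payout calculation for a private scrap center that pays
# per kg of sorted fraction. All prices are invented for the example.
PRICE_PER_KG = {            # EUR/kg, hypothetical
    "copper": 5.20,
    "zinc": 1.10,
    "steel": 0.15,
    "electric_motor": 0.60,
    "pcb": 1.80,
}

def payout(weights_kg):
    """weights_kg: dict mapping fraction name -> weight in kg."""
    return round(sum(PRICE_PER_KG[frac] * kg for frac, kg in weights_kg.items()), 2)

# A scrapper who pre-disassembled an appliance into sorted fractions:
print(payout({"copper": 2.5, "steel": 14.0, "electric_motor": 6.0, "pcb": 0.8}))
```

The same fractions sold undisassembled would be bought at a single, lower mixed-scrap rate, which is the incentive discussed above.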

It is important that whenever possible, such as in new developments, these centers recover a broad range of equipment and have adequate space for proper sorting into major device categories as well as sheltered space – protected from rain and temperature extremes.

The advantage of protection from rain, temperature extremes and condensation is to safeguard the value of electronic components and ICs, as well as to reduce the risk of shorts and battery damage, particularly if the intent is to apply a granular recovery process downstream.

As for electromechanical devices such as functional electrical motors, they are protected from corrosion when stored indoors. Pneumatic devices or lines containing refrigerant gases should also be protected from water ingress that could corrode the lines.

There is also the phenomenon of galvanic corrosion which is fostered by storing unsorted metal scrap.

Finally, one should know that scrap sorted by type will find its way into furnaces. Feeding wet scrap into furnaces is extremely dangerous, as water expands fast into a large volume of steam. The resulting steam explosions are deadly and may destroy whole plants.

To summarize the e-waste specific points of these general collection centers:

-They are not centered on high density e-waste (smartphones, laptops), but should nonetheless accept it.

-Low to medium density e-waste, such as appliances containing a relatively large and complex electronic control board (washing machines, HVAC units, boiler controls), should whenever possible be stripped of these circuit boards in a non-destructive manner, especially if the SoH file indicates that the issue may be mechanical rather than electronic.

-They should have a sheltered and controlled environment to store high density e-waste and preserve its state (it may be working).

-Whenever possible, state-of-health information should be filed for each device. Users should not be incentivized with higher rewards if the device is working or partially working, as this may encourage fraud.

-These general purpose collection and sorting centers should be offloaded of high density e-waste, in particular items containing LIB batteries, in a timely manner to prevent improper stockpiling conditions.

State of Health : User assessment of e-waste prior to collection

A first step that could involve the user of the recycling center would be to provide an assessment of the state of the device: is the device functional? If partly functional, what is the issue? If not functional, how did it stop functioning? The user can also input additional data in notes about the device.

This valuable information could be the basis of an algorithmic / AI decision on the process destiny for the device:

  • Refurbishment & re-use,
  • Repair center for in-depth assessment – (for high value devices),
  • Processing as e-waste for granular or crude recovery

As this article is focused on e-waste processing, we will discuss the third option. Routing to processes 1 & 2 should be done in priority, and frequently “just in time”, to safeguard the value of the devices.
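The routing decision sketched above could be expressed as a small, purely illustrative Python function. The thresholds, field names and return labels below are assumptions for the sketch, not anything defined in this article:

```python
# Hypothetical sketch of an SoH-based routing decision.
# Thresholds and field names are illustrative assumptions.

def route_device(functional: str, market_value: float, repair_cost: float) -> str:
    """Route a collected device to one of the three process destinies.

    functional: "yes", "partial" or "no" (user-filed state of health).
    market_value / repair_cost: estimates in a common currency.
    """
    if functional == "yes":
        return "refurbish_reuse"
    # Repair only pays off for high value devices with a comfortable margin.
    if functional == "partial" and market_value > 2 * repair_cost:
        return "repair_center"
    return "ewaste_recovery"

print(route_device("yes", 100.0, 10.0))      # refurbish_reuse
print(route_device("partial", 300.0, 50.0))  # repair_center
print(route_device("no", 20.0, 40.0))        # ewaste_recovery
```

In practice such a rule would be replaced or weighted by a trained model, but the interface (SoH fields in, a process destiny out) would stay the same.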

Depth level of e-waste sorting at the collection center

We will now focus on e-waste destined to be dismantled for metal recovery or granular, “component” recovery.

E-waste sorting at this point should be done with the downstream transport logistics that await the e-waste feed in mind: train, TIR, maritime, containerized or not. Mostly, e-waste will be stored on pallets.

E-waste is varied in terms of transport friendliness. At this point, we are more concerned with achieving high storage density of waste and keeping waste in a safe state for transport, while limiting damage, so that IC or component level recovery may still be attempted.

Since e-waste sorting & palletization would be attempted after filing and the 3R decision, it should be done by workers of the collection center, as they know best how to sort and arrange devices on pallets.

Whenever possible, the model of these collection centers should involve a front desk and an off-limits area for customers, especially regarding high density e-waste, so as not to incite theft.

Types of e-waste pallet categories

Besides transport considerations, sorting should take into account the downstream processes that await a particular type of device, as these may already be industry tailored in the recycling plant.

An example of palletization follows, from high value to low value:

  • Enterprise IT equipment,
  • Automation, industrial equipment,
  • Medical equipment (the medical field should have its own dedicated recycling sector),
  • Trade equipment, instrumentation.

Consumer grade :

  • Personal computers
  • Removable battery packs (tools, laptops, e-bikes)
  • Hi-Fi, sound and video equipment
  • Smartphones, laptops, tablets: these devices contain large LIBs (high power density), so they present higher hazards
  • Screens and TVs
  • Printers, MFPs, scanners (quite low density)
  • Low value gadgets: vapes etc., hazardous because of questionable manufacturing practices and the presence of hard to remove LIBs

The problem of LIB removal.

A safe transport practice is to remove the LIB from the device, to prevent spurious turn-on or, in general, any path for the battery to discharge in an uncontrolled fashion. Since at this point the decision to send the waste to component recovery has been made, the battery won’t serve much purpose in the device. It is thus preferable, for devices that allow the battery to be removed without large effort, to remove the battery and palletize batteries independently. Care should be taken to cover any exposed terminals, not to overburden the pallet, and not to expose the batteries to crushing forces.

Significantly higher risk equipment includes smartphones and gadgets fitted with LIB pouches: their manufacturing processes are questionable, and their form factors make proper palletization hard, with bulk treatment being the only option. All of this makes the risk of spurious turn-on from an exposed, depressed power button relatively high, with the thermal risks that go with it. Short circuit events are also more frequent in these devices. We recommend either early inactivation of these devices through shredding and hydrometallurgical processes in-situ (a specific sub-plant for gadgets close to the collection center), or devising a safe but cost effective palletization technique (such as using a sand buffer in the pallet).

Alternative collection strategies

At-home collection initiatives, mobile collection initiatives, as well as device specific collection buffer centers in large metropolitan areas should be encouraged, as these urban environments are an ecosystem more conducive to SoH assessment and refurbish & repair operations.

In that regard, there is also a seldom addressed, silent issue of home and business stockpiling of e-waste, due to the lack of e-waste targeted collection initiatives in certain developed countries.

In the end, whatever the collection strategy, sorted e-waste flows destined for recycling are merged at the e-waste recycling center.

E-waste deep recycling

We will now focus the majority of the article on assessing the feasibility and economic soundness of a “deep”, or high granularity, recycling operation for the major component of e-waste, that is, the printed circuit board, while it is still populated and not shredded. This applies when refurbishing, re-use or repair is deemed impractical or not economically viable, for reasons such as obsolescence or major damage.

Inbound flows.

We assume that at this point the inbound flows are adequately sorted following the above categorizations, or the categorizations required by the plant to accept the shipment. High value categories will be preferred for deep recycling, as they usually contain rare and valuable ICs. Less valuable stock would be more or less subjected to direct shredding, after manual battery and screen separation when applicable.

Case / Enclosure opening.

There are two main case and enclosure fastening systems: screws and plastic clips. Screw based systems are easier to service and more robust. Plastic fastening using clips and notches is sometimes even designed to break, to limit serviceability and reusability. On the other hand, tight plastic enclosures may provide a better level of protection against water ingress and other user damage. Whatever the method, opening the case takes a non negligible amount of time, and goes as follows for screwed cases: identify the screw slots; some may be hidden under “warranty void” stickers or rubber pads. Black screws and black slots make visual identification slower. Some slots are sunken, requiring specific screwdrivers and making the screw head type harder to identify. Another big hurdle is the multiplication of screw head slot designs. While these deter non qualified users from tampering with the device, they also make servicing harder, with the need for frequent tool tip changes.

Once all screws are removed, some cases require a specific motion to fully remove the enclosure, or one part of the enclosure.

Enclosure opening methods largely depend on the category of the device. Devices of the same category usually employ the same patterns. For example, smartphones require ungluing and removing the screen with molybdenum wire, and rarely, for consumer models, screws. Most have no serviceable battery, as an ongoing trend. <characterize>

Laptops use screws and are intermediate in terms of serviceability. One could think of the optimal number of screws required to preserve tightness while limiting opening time. Some practices that seriously limit serviceability are: overtightening screws (too much torque applied by the manufacturer) and the use of threadlock compounds. Although these methods are rare, they should be known as unfair practices.

Access to a personal computer build (using a tower enclosure) is fast, usually limited to a couple of hand removable screws or a pull lever.

Enterprise servers and enterprise equipment designed for frequent part replacement, upgrade or maintenance are the fastest to open. They are also quite easy to depopulate.

The most general layout for devices containing PCBs, besides smartphones and laptops, is a top cover; removing the cover gives access to the PCB (which may itself be an ensemble of smaller modules over a mainboard), to a set of connectors between PCB areas and daughter PCBs, and to external control and input/output ports.

Cables and connectors

At this point the usual dismantling process is concerned with cable and connector removal. This step should be investigated in detail as some connectors are very specific and expensive, as well as a source of copper, gold plating, or silver.

Specificity of cables and connectors is often required by the application, particularly for shielding and EMI compliance, precise impedance control, RF specificity, and density requirements.

They are also a way to enforce customer adherence to the brand for parts (e.g. USB / Lightning), through format incompatibility and protocol divergence. These issues are currently being resolved in the EU.

At this point, the operator or the AI / deep network should identify high value cables and connector assemblies. Ideally, most data cables and ribbons would need to be preserved intact on both sides, or at least such that most of the cable length is preserved (if soldered to the board on the end opposite the connector). The idea here is to reuse the whole connector / cable assembly on the same device or a similar compatible device; to this effect, cable length should not be trimmed.

Small AWG*, very specific and very dense cables cost much to produce while containing a low quantity of copper compared to the plastic sheath. It follows that these must be kept intact whenever possible.

On the other hand, power cables forming buses are less valuable in terms of being kept intact: plastic sheath / copper separation is easier, and they usually contain a larger copper to plastic sheath ratio.

<insert info on cable recycling, separation of copper from plastic>

Usually a recycling operation will send cables to a specific recycling plant or a specific unit of a larger recycling complex. To sum up, a proposal for sorting cables would be:

-Cables, cut on both ends.

-Cables with connector on one side.

-Cables with connectors on both sides, non damaged, resellable as old-stock.
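The three bins above could be expressed as a trivial classifier. The "intact connectors" sorting key and the bin labels below are assumptions of this sketch:

```python
# Toy classifier for the three proposed cable bins.
# The attribute (number of intact connectors) is an illustrative assumption.

def sort_cable(connectors_intact: int) -> str:
    """Bin a cable by how many intact connectors it retains."""
    if connectors_intact >= 2:
        return "connectors_both_sides"  # resellable as old-stock if undamaged
    if connectors_intact == 1:
        return "connector_one_side"
    return "cut_both_ends"

print(sort_cable(2))  # connectors_both_sides
print(sort_cable(0))  # cut_both_ends
```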

The issue of cable entanglement.

One would assume that the large number of references makes binning all cables together a better option, deferring classification to a further step. Alas, stacking cables of different lengths, types, diameters and rigidities in a single container gives rise to entanglement.

For this reason it’s better to perform classification early.

Thermal management components : Heatsinks and Fans

Whenever possible, it is preferable to remove heatsinks and fans before attempting to extract the main PCB from the enclosure, particularly if the heatsink assembly is heavy, as lifting the PCB with the heatsink could damage the PCB and the components thereon through bending.

Aluminum heatsinks are valuable components, as they are specifically cast, as are copper heat pipes and copper inserts, and to a lesser degree fans, as these are subject to wear from lubrication loss, bearing wear, and dust ingress. Whether they should be sold as scrap and melted down, or recirculated as heatsinks, depends on updated market analysis, the logistics network, and the area/country of operation.

Some heatsinks are hard to remove without damaging the underlying IC, as they may use a thermal glue compound; screwed designs are the easiest. Shark-fin heatsinks, such as those used for MOSFET cooling, are usually soldered to the PCB through slots. Removing them requires high power irons, given the large solder mass. A destructive process can also be tried, such as a specific metal saw, whichever works best.

PCB Extraction

It is important at this point that all modules, mezzanines, connectors, daughterboards and heatsinks that give the electronic device a “3D feel” be removed. This is because automation of a granular recovery process would be severely hindered if the router head has to travel in an environment with obstacles. It would also have great difficulty removing components from boards at 90° relative to the mainboard.

A major automation challenge is extracting the populated PCB from the enclosure, as quite a lot of products are designed to be unserviceable or hardly serviceable, and require hand disassembly. This is a major bottleneck. For larger units, a robotic preprocessing arm dedicated to unscrewing could alleviate the burden. A set of suspended tools could be available at hand for hard to recover units, for cutting the case open, such as angle grinders, and other specific tools such as suction cups.

The next issues are internal screws, daughter and mezzanine boards, connectors and cabling.

Internal screws could be managed robotically, except in hard to access zones, while connectors are usually simple to disconnect and could be processed by hand.

Once the PCB has been extracted, we would move into the next step.

PCB Component state assessment.

A first look using deep learning would involve identifying fluid, heat or mechanical damage (bent surfaces) on the PCB. A heavily damaged PCB, or one that has been exposed to liquid damage, may have a large proportion of its components in a non recoverable state, prompting the PCB to be sent to shredding operations directly.

IC identification and individual IC value assessment

Identification of PCB source and function is a process reaching maturity, technology wise. Once the source and function, as well as individual IC identification, are established for a non damaged PCB, IC components would be sorted by decreasing price tag. Long manufacturer lead times for new components should also be taken into account in the characterization, by aggregating lead time data from major resellers through their APIs, as this metric may not always be reflected in the price. The trade-off between the value of the ICs deemed salvageable and the time consumed for extraction (robotic burden) would then be computed.
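As a purely illustrative sketch of such a value-vs-robotic-burden ranking: the weighting, field names and figures below are invented for the example and are not taken from any real reseller API:

```python
# Hypothetical salvage ranking: market price boosted by lead time,
# divided by robotic extraction time. All numbers are illustrative.

def salvage_score(price: float, lead_time_weeks: float, extract_minutes: float) -> float:
    """Higher score = extract first. Long lead times raise effective value;
    long robotic extraction times lower the score."""
    effective_value = price * (1.0 + 0.05 * lead_time_weeks)
    return effective_value / max(extract_minutes, 0.1)

ics = [
    ("FPGA", 120.0, 52.0, 4.0),   # (name, price, lead time in weeks, minutes to extract)
    ("MCU", 8.0, 20.0, 1.0),
    ("opamp", 0.5, 2.0, 0.5),
]
ranked = sorted(ics, key=lambda ic: salvage_score(*ic[1:]), reverse=True)
print([name for name, *_ in ranked])  # ['FPGA', 'MCU', 'opamp']
```

A production system would of course calibrate the weights from market data and measured robot cycle times rather than fixed constants.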

An assessment of the robotic burden should however be done to ensure timely processing and no runaway energy costs. That said, the recovery of high value ICs may warrant the investment in extensive robotic resources and energy expenditure; the total e-waste pollution cost in terms of ecosystem damage should also be taken into account in this assessment. A net profitable process from high value IC recovery would further warrant the development of these techniques, provided that the robotic resources and maintenance costs involved do not offset the gains from an ‘entropic’ perspective.

Neural networks have been trained to ease identification of whole PCBs from databases.

Printed Circuit Board identification using Deep Convolutional Neural Networks to facilitate recycling

https://www.sciencedirect.com/science/article/abs/pii/S0921344921005723

We will first discuss the optimizations that can be taken to limit the neural network burden.

  • Use of QR codes to ease identification of ICs and augment data density.
  • Creation of a chip ID addressing space for that purpose.
  • Definition of a footprint size limit under which QR-code assignment is not required, as it would be counterproductive, or not feasible due to IC tag printer limitations.
  • Definition, ‘a minima’, of the need for a QR-code manifest printed on the PCB. The challenge is to find a print zone on high density PCBs (with low free real-estate). This manifest would provide the bill of materials and device map for the device, either in place through the data, or externally through a hyperlink.
  • This requirement may be challenging in terms of intellectual property. The major decision all industry stakeholders would face is either to define a manifest that gives public access to the BOM and IC map (routing is not included, which would limit IP damage), or access only to certain parties.
  • Provisions would need to be made to require restricted access channels to the manifest-to-BOM linking database, for IP protection or MILSPEC fields.

It is proposed that a public and a restricted database coexist, with manufacturers choosing which category a device belongs to: they may opt out of the public database entirely, or guarantee public access.
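A hypothetical shape for such a QR-code manifest could look like the following. Every field name, as well as the device and chip IDs, is invented for illustration; only the ideas (BOM, coarse IC map without routing, access class) come from the discussion above:

```python
import json

# Hypothetical PCB manifest: a bill of materials plus a coarse IC map,
# with an access class chosen by the manufacturer. All fields illustrative.
manifest = {
    "device_id": "EXAMPLE-0001",     # entry in a hypothetical chip ID space
    "access": "public",              # "public" or "restricted"
    "bom": [
        {"chip_id": "CID-00A1", "package": "QFN32", "qty": 1},
        {"chip_id": "CID-00B2", "package": "SOIC8", "qty": 4},
    ],
    "ic_map": {"CID-00A1": [12.5, 33.0]},  # x/y position in mm; no routing data
}
encoded = json.dumps(manifest)  # QR payload, or the target of a hyperlink
print(len(encoded) < 2953)      # fits in one byte-mode version 40 QR code
```

For dense boards, the manifest would more realistically be an external hyperlink, with access control applied at the linking database.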

A public access manifest would undoubtedly be a major boon to the right to repair initiative, which we support.

IC extraction and binning.

A PCB under inspection would have its high value ICs extracted in a reflow process, as well as robust components such as inductors, transformers, and other non IC high value components.

Here, we have to take into account the various IC packages. SOIC-like packages have accessible pads, while others have pads underneath (these are “balled”). The total amount of reflow heat required would be PCB dependent, and zone dependent for partial extraction (applying localized reflow heat), while also taking into account the reflow recommendations of the manufacturer to prevent compounded wear and damage.

Since extraction would be performed under heat, the IC suction cup arm performing the lift of the IC once it is unsoldered should be temperature resistant. This could be a problematic constraint.

Another approach would be to apply even reflow heat to a suspended PCB until most, if not all, components fall into a recovery tray from a height that minimizes pin damage. The tray would then be extracted from the reflow oven and subjected to a reversal action over a complementary tray (a “cake unmolding” step), such that the ICs’ top side is visible. IC identification systems (QR-code and/or OCR) would then pick and place valuable ones onto a conveyor belt; this pick and place process would happen outside of the high heat zone.

Extracted ICs would then be sorted. This process would involve a highly reflective, electrostatically protected conveyor belt, on which the ICs extracted by the robotic arm would travel and be subject to QR code identification (or OCR).

A minimum travel separation between ICs should be guaranteed to avoid false binning.

A set of bins would lie next to the belt, with one bin per IC type or some broader categorization. The issue at this point is to pick the IC from the conveyor belt and place it into the product bin without damaging it. A suction cup pick and place would be the most sensible idea, but the whole belt would need a decent number of arms traveling along it performing pick and place operations. Keeping the process real time would need one arm per bin. In that case the most straightforward pick and place motion is sufficient: down (pick IC), up, right (perpendicularly), down (place on tray), up, left, and repeat. A vacuum line manifold would allow powerful IC holding, even for small footprints.
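The six-step motion cycle described above can be written out explicitly. The axis names and the vacuum toggle are assumptions of this sketch; the point is that the cycle is closed, returning the arm to its start position:

```python
# The one-arm-per-bin motion cycle, as an explicit step list.
# Axis conventions and the vacuum toggle are illustrative assumptions.

PICK_PLACE_CYCLE = [
    ("z", -1, "vacuum_on"),   # down: pick IC from the belt
    ("z", +1, None),          # up
    ("y", +1, None),          # right, perpendicular to belt travel
    ("z", -1, "vacuum_off"),  # down: place IC in the bin/tray
    ("z", +1, None),          # up
    ("y", -1, None),          # left, back over the belt
]

def run_cycle(pos):
    """Apply the cycle to an (x, y, z) position; it must return to start."""
    x, y, z = pos
    for axis, delta, _tool in PICK_PLACE_CYCLE:
        if axis == "y":
            y += delta
        elif axis == "z":
            z += delta
    return (x, y, z)

print(run_cycle((0, 0, 0)))  # (0, 0, 0): the arm ends where it started
```

Because the cycle is closed, a single-axis pair (y, z) gantry per bin suffices, which keeps each arm cheap.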

ICs could be placed on a chocolate-bar style tray, but these are usually matrices with several lines and columns, adding one additional travel motion dimension for the pick and place arm. The advantage of these trays is that they are not sealed, so subsequent access to the ICs for testing is easier.

Alternatively, the arm could place the ICs inside standard plastic tape, wound on reels at the end of the perpendicular lines; but these reels are sealed, which means the ICs would be resold at a large discount (no guarantee of quality).

For larger mass, larger footprint, high value components such as CPUs or GPUs, a specialized robotic pick and place arm would be used, placing the ICs on ESD mats.

DIP components would probably require pick and place due to the risk of lead tangling, and would be pushed into a foam mesh. Since those use a through hole mounting method, they would be subject to a specific process, which we will now discuss.

Special case of through hole PCB or hybrid SMD / Through hole.

Certain high power components, such as large inductors, transformers and MOSFETs, are through-hole components. These components do not have solder pads, but leads: solderable metal legs that go through a hole in the PCB. This hole is padded for proper soldering at the extremities, and has a metal insert creating a proper ‘tunnel’ structure. Solder and soldering operations may be applied on the PCB side with the protruding leads, with flux helping the solder flow properly into the hole, or on the other side, or both, though this is less common.

A value assessment should be done for through hole PCBs first.

Broadly speaking, we can categorize through hole PCBs into these categories:

-Obsolete or legacy equipment boards (industrial controllers, etc.)

-‘Vintage’ equipment, which still retains high value due to the presence of ICs that are out of production, or has museum / historical value

-Power electronics (SMPS, UPS power boards)

-Old gadgets.

Cases 1 and 2 are interesting to study, as some systems in the world still depend on sourcing some of these boards for continued operation. Market demand (on sites such as eBay or industry specific forums) should be automatically (AI) searched prior to any further processing of these boards, as they could be of interest sold “as is” to specific industries.

Types 3 and 4 are probably best sent to shredding and metal recovery. We should note that older PCBs typically have a higher noble metal content than modern ones, making them valuable in that regard.

Extraction of components is typically harder for through hole components, as more energy has to be deposited on the solder joint, and heating may have to be performed on the bottom layer while the component is lifted mechanically by a robotic arm or suction cup on the top layer side.

One simpler and more expeditious method would be heat & press & shake: the PCB would be held above a collection surface, horizontally, with the bottom layer on top and the (majority of) components hanging down.

The bottom layer would then be subjected to reflow oven heat from above in a controlled manner, while the components are protected from excess heat by the PCB providing thermal shielding. A shaking motion or vibration would ensure that components fall down into a padded tray, by displacing the liquid solder joint and providing mechanical energy. Another option would be applying a hot plate to the bottom layer, but this could pose issues with non flat boards: those containing extraneous components on the bottom layer, cables, etc. Applying a hot plate would however provide mechanical action by pushing the lead stubs into the through holes, and could achieve faster component recovery than the shake method alone. The hot plate method could also generate more noxious emanations from plate contact with PCB conformal coatings or other finish compounds, compared to reflow methods, and could have the undesirable effect of bending the pins of DIP components scheduled for recovery.

Heavy components would be privileged, and that is good, because granular recovery of through hole components is mainly dedicated to large inductors and transformers, which have substantial value.

Specific ICs, such as DIP packages that have been out of production for a long time but are still sought after, may require additional pulling, as DIP leads have a tapered “Y” form that provides mechanical “jamming” when installed, independently of solder presence.

Component recovery.

Falling components should land on padded material, and fall travel should be moderate, while still allowing robotic operations underneath.

High frequency switching transformers, and small form factor transformers in general, may have small gauge enameled copper windings that can break loose from the through hole lead, be it from excess heat or mechanical action, and be subsequently difficult to reattach, rendering the transformer useless as is. These should be handled and processed with care, and the process prototyping should examine recovery performance on these components.

Most other components, such as through hole diodes, resistors and BJTs, are not reusable and will go out as shredded waste.

It should be noted that an SMD component can be made to look new, while this is not the case for a through hole component, as the lead will only have a remaining stub, compared to a 2 to 3 cm lead when new.

Nevertheless, some through hole components could still be investigated for re-use, such as:

-Supercapacitors

-Very large electrolytic capacitors or high voltage / large capacitance ones.

-Sought after triodes (vacuum tubes)

The issue of ESD protection

Quality Checks & Testing

Manufacturers have testing capacities for the ICs they produce. For high value ICs, one option is an offsite testing process at the original manufacturer’s facility. This could however carry a large logistics burden.

For on-premises testing, this step should be done prior to inventory; it is possible at this stage to identify bent pins, ball issues, etc. Capacitive and resistive testing of pins by hand is a labour intensive process, and functional testing is even more time consuming.

This would mean that the better option is to sell at a discount and inform the customer of the DoA rate for these components, or offer the customer compensation and protection mechanisms at purchase.

A statistical database of working vs dead ICs by chip ID should be populated on the basis of these tests, whether done on site or through customer feedback.
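A minimal sketch of such a per-chip-ID statistics store follows. The class, field names and the DoA metric shape are assumptions for the sketch:

```python
from collections import defaultdict

# Hypothetical per-chip-ID reliability store, fed by on-site test results
# or customer feedback. Names and structure are illustrative.

class ICYieldDB:
    def __init__(self):
        # chip_id -> [working count, dead count]
        self.counts = defaultdict(lambda: [0, 0])

    def record(self, chip_id: str, working: bool):
        self.counts[chip_id][0 if working else 1] += 1

    def doa_rate(self, chip_id: str) -> float:
        working, dead = self.counts[chip_id]
        total = working + dead
        return dead / total if total else 0.0

db = ICYieldDB()
for ok in (True, True, True, False):
    db.record("CID-00A1", ok)
print(db.doa_rate("CID-00A1"))  # 0.25
```

The published DoA rate per chip ID is exactly what the discounting and compensation mechanisms above would be priced against.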

Limitations of IC reuse

Besides regulatory requirements that enforce the use of new components with graded quality, thus limiting the market for second-hand ICs, there are specific cases that hinder IC re-use, such as:

  • IC identification schemes and handshakes, expected serial numbers, or any device instance specific mechanism that defeats an IC swap. These safety mechanisms are usually required to enforce data integrity and protection at the hardware level, such as DRM technologies and the TPM (trusted platform module).
  • Data dependence: some ICs may hold stored information that is lost after failure and cannot be recovered in a swap operation. Ex: the bad sector list (user & manufacturer) in HDDs.

Post processing of PCB for MLCC, MLCI, Resistors and Diodes recovery

Of these categories of passive components, MLCCs are the most valuable because of their palladium and silver content. Palladium is usually found in high-end, enterprise grade MLCCs, while lower grade MLCC electrodes are nickel based. Besides these metals, the ceramic is BaTiO3, and there is a remainder of solder on the pads, which is tin, lead and silver based.

It could make sense, then, to recover MLCCs separately for bulk “grading”, while disregarding other low value components. Segregation of components after unsoldering or chemical separation is cost prohibitive.

In small scale recycling operations done by scrappers, MLCCs are recovered by driving a tool such as a screwdriver across the MLCC to mechanically separate it. Such a motion could be automated using a CNC/router like machine, coupled with a suction device to collect the MLCCs. However, force feedback may be required, and the PCB should be pinned firmly on the operating table. XY alignment of the PCB could prove useful to ease the motion of the chisel along an axis, so as to make contact perpendicular to the pads. However, the sheer force applied could be sufficient to dislodge the MLCC at any angle. Such a process could be easier and less energy intensive to implement than a heat based process. Fundamentally, the machine would operate as a reverse chip shooter, but with far simpler controls and no IC/component feeding induced complexity.

Identification of MLCCs is based on the component ID close to the SMD MLCC, usually written on the silkscreen layer. The closest tag to the component is usually the tag linked to it, though it is not guaranteed that the tag is always positioned the same way relative to the component (above, below, left or right) for all components. A PCB manifest, as mentioned earlier, could make the CNC tool positioning process straightforward, without having to resort to the silkscreen ID.
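Assuming a manifest that carries component positions, the chisel strokes could be derived directly from it. The field names, geometry and clearance value below are purely illustrative:

```python
# Sketch of manifest-driven chisel positioning: one stroke per MLCC,
# along its pad axis, approaching perpendicular to the pads.
# All fields and coordinates are illustrative assumptions.

def chisel_moves(mlcc_list, clearance_mm=2.0):
    """Return (start, end) strokes, one per MLCC, along its pad axis."""
    moves = []
    for part in mlcc_list:
        x, y = part["pos_mm"]
        dx, dy = (1, 0) if part["axis"] == "x" else (0, 1)
        start = (x - clearance_mm * dx, y - clearance_mm * dy)
        end = (x + clearance_mm * dx, y + clearance_mm * dy)
        moves.append((start, end))
    return moves

board = [
    {"ref": "C12", "pos_mm": (10.0, 5.0), "axis": "x"},
    {"ref": "C13", "pos_mm": (10.0, 9.0), "axis": "y"},
]
print(chisel_moves(board))
```

Without a manifest, the same stroke list would have to come from silkscreen OCR plus a learned tag-to-component association, which is exactly the harder problem the manifest avoids.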

Such a method could allow separation to a component type-level granularity, allowing subsequent chemical processes to be performed with better yield and less noxious leachate characteristics.

Usually, PCBs are shredded for gold extraction at this point; one process is the acid bath. They would still have a large population of smaller SMD resistors attached to the board.

SMD Separation : indiscriminate methods.

These methods produce lower grade bulk SMD material, as all passive components are mixed together.

Chemical separation of SMDs is one process that can provide separation, but it would in many cases damage the SMDs.

Another method would be dry separation by reflow heat.

The other issue with this SMD separation method is that the collected PCBs are typically in a high entropy state, stacked on one another, with reduced paths for hot air flow. A ‘shake and bake’ approach could be tried to make the SMD dust fall into the collector, with the PCBs being held by a grate. This method would be energetically costlier than chemical separation.

Whether these small SMD components (diodes, capacitors, resistors) could be sold as “bulk SMD dust” after recovery in a dry heat process, without sorting by resistance or capacitance value, but possibly by footprint on a shaking inclined separator, remains to be seen; as of today, it does not seem commercially or energetically viable, except maybe for large 1206 or 1208 components.

As for the rest of the SMD ‘sand’, it would mainly be composed of nickel, solder (tin, lead and silver), alumina and ceramic substrate, and thin metal films. Some components, such as diodes, have an appreciable plastic content; ideally they would be separated.

To sum up, we envision the following grades of SMD bulk:

Grade 1 : MLCC, high palladium content (sourced from old boards, pre-2001, and enterprise boards)

Grade 1A : MLCC, low palladium, high nickel content

Grade 2 : SMD inductors (due to copper and ferrite content)

Grade 3 : SMD resistors

Grade 4 : Diodes, BJT, FETs
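The grading above can be captured as a simple lookup. The component type labels are illustrative; the grade values follow the list:

```python
# Bulk SMD grading as a lookup table. Type labels are illustrative
# assumptions; grade values follow the grades defined in the text.

SMD_GRADES = {
    "mlcc_high_pd": "Grade 1",   # high palladium (pre-2001 / enterprise boards)
    "mlcc_low_pd": "Grade 1A",   # low palladium, high nickel
    "inductor": "Grade 2",       # copper and ferrite content
    "resistor": "Grade 3",
    "diode": "Grade 4",
    "bjt": "Grade 4",
    "fet": "Grade 4",
}

def grade(component_type: str) -> str:
    return SMD_GRADES.get(component_type, "ungraded")

print(grade("mlcc_high_pd"))  # Grade 1
print(grade("fet"))           # Grade 4
```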

However, there is the case of larger SMD components, mainly in power electronics. These should get separate treatment; some of them, such as MOSFETs, are quite critical and accumulate damage.

Large SMD resistors, inductors, diodes, BJTs and FETs are however simple to test for obvious failure.

Separating them from the printed circuit board using a dry method could also improve the complementary printed circuit board treatment process, yielding a lower heavy metal concentration in the leachate, as solder pads are stripped by the airflow if it is at sufficient velocity.

Refurbished IC and SMD industry re-use.

The main issue with component re-use would be a degradation in final product quality; how this overall, industry-wide degradation of quality would be assessed is the main point. An analysis of component robustness should be done, as said previously, based on testing and customer feedback. As for passive components, their failure modes are known, and simple testing may give an assessment of component state, such as a simple LCR test, with specialised bench tests for BJTs and FETs.

Re-use of these components would however need to be firmly regulated (that is, not tolerated) for mission critical, automotive, aerospace, military, healthcare, and enterprise products. It would, on the other hand, be encouraged for consumer electronics, but not to the point that the degradation severely undermines the device’s predicted lifespan. We have to remember that complex electronics may have single points of failure that render the whole device unusable; a single re-used component could provoke loss of function or severe degradation of the device. Moreover, while this would be in contradiction with the ethos of making devices as durable as possible, we see it as an alternative and complementary path. Note that extensive re-use of ICs and SMDs would need a mature e-waste management ecosystem.

A mostly circular IC economy would also be a burden in terms of logistics cycles, from production to repair and e-waste, compared to an economy of robust, high quality devices, where the device is ‘unseen’ by these logistical cycles during its use time. But we need to remember that quite a lot of device users treat them with little respect, and even quality devices may be subject to catastrophic failure and degradation due to user negligence.

A probable solution would be a mix of the two approaches: high quality release-to-market devices, and lower grade, non mission critical devices using refurbished ICs and components.

Resource recovery for non-salvageable components and PCBs

These processes are quite mature, but involve large industrial machines and chemical processes. The main goal here is precious and heavy metal recovery, mainly Cu, Ag, Au, Sn, Pb.

To understand the challenge precisely, we have to delve into the manufacturing process of printed circuit boards. Commonly used PCBs are fibreglass laminates (FR4) cured with an epoxy resin with flame-retardant properties. Multilayer PCBs use an FR4 core and prepreg, which is more or less the same material as the core, except that it is cured not beforehand but during the sandwiching process. Between these layers lie copper fills and traces. High-density electronics use PCBs with multiple layers, up to 8 layers.

It is important to remember that there are a variety of PCB core materials besides FR4. Some designs need precise control of the dielectric properties, mainly designs that operate in the RF spectrum, where impedance control and loss control of traces are critical.

These alternative substrates should ideally be subjected to identification for a tailored recovery process. What makes the issue quite complex is that the precise compositions of these cores are trade secrets. We recommend a proper evaluation using modern chemical analysis labs (GC/MS, etc.) on samples from major suppliers.

Besides the core, prepreg and copper layers, we have pads and vias as connection components, plus solder mask, silkscreen and conformal coating.

For each of these layers, the following list gives the most common compounds used to make them; we will discuss the challenges associated with recycling on that basis.

Silkscreen : epoxy- or acrylic-based inks.

Conformal coating : acrylic, silicone, polyurethane or parylene resins.

Solder mask : liquid photoimageable (LPI) epoxy polymers.

Conductive layer : copper foil.

PCB core : cured fiberglass/epoxy (FR4) laminate with brominated flame retardants.

PCB prepreg : partially cured fiberglass/epoxy.

Pads / Vias : copper, with surface finishes such as HASL (tin-lead or lead-free solder) or ENIG (nickel/gold).

Due to the layered nature of the PCB assembly and the internalization of copper traces and planes, it is critical that the chemical process has access to each layer. Processing the PCB as small shredded chunks makes the exposed surface larger. The chemical bath has to be able to dissolve the resins that give the PCB its rigidity and insulation, if one wishes to recover copper or other metals from internal layers. Following this observation, some PCB cores are now made from environmentally friendly laminates; https://www.jivamaterials.com/ is an example of such technology.

Once the process is done, liquid processes end with a leachate of plastic matter, organic compounds and traces of metals. The liquid phase needs to be subjected to processes such as precipitation, coagulation, filtration, adsorption, AOP… The remaining sludge is the noxious concentrate. Further processing is discussed in the literature, but additional processing by heat/plasma ovens or pyrolysis is energy intensive and not deemed feasible until cheap and ubiquitous energy sources are available. For now, neutralisation and embedding in a non-reactive form, such as vitrification, would be preferable, but these methods are still energy intensive. The result could be stored in stable bedrock formations like other dangerous waste such as radioactive waste, asbestos, etc.

<section needs more in depth analysis>

Refined toxicity : E-waste Leachate.

Leachate is the liquid-phase, highly toxic byproduct, a mix of organic chemicals and inorganics (heavy metals and precious metals), resulting from water seeping through e-waste in landfills, creating contaminated water that flows down to aquifers or other hydrologic systems. The following paper gives a detailed analysis of leachates in its section 6 :

Electronic waste and their leachates impact on human health and environment: Global ecological threat and management

https://www.sciencedirect.com/science/article/pii/S2352186421006970

In e-waste management, leaching processes also denote the liquid phase used to recover precious (Ag, Au) and toxic metals, using specific reagents to reduce the metals; the field of study is hydrometallurgy. The main issue with these techniques is that, while they help in recovering metals, the organic compounds remain to be dealt with.

Batteries : Re-use, Refurbish, Recycle ?

Prior to tackling the subject, we need a precise inventory of battery technology by form factor and chemistry.

A chemical battery is an electrochemical device that stores energy through a redox reaction between anode and cathode materials.

Major Classes of batteries by chemistry :

  • Lead based
  • Nickel based (Cd or MH)
  • Lithium based

The field of lead-based battery chemistry recycling is mature, as are the logistic flows that deal with these batteries. These batteries are quite stable, and their construction is simple. The main danger of these batteries, recycling-wise, is the presence of lead, a heavy metal with large polluting potential that is hazardous for the workers responsible for creating new electrodes from molten lead (due to the risk of lead vapour absorption). A lead-based battery is nevertheless dangerous if shorted because of its high energy content, which will lead to H2 gas emission, temperature rise and high fault currents. Due to their lower energy density, lead-based chemistries are used for small energy buffers in vehicles (automotive lead-acid batteries, VRLA), in large ships, and in stationary battery banks for commercial and industrial operations (UPS), as well as a storage medium for residential solar or wind power.

Nickel based (Cd or MH)

Mainly found in standard battery packages such as A, AA, AAA and 9V. Known as nickel metal hydride (NiMH).

Found powering small electronic devices and power tools. A recent trend of switching from NiMH technology to lithium cell packs where it is not warranted (as a planned obsolescence strategy) is worrying. An assessment of the nickel market should be done to better forecast the trends of the NiMH battery business. NiMH cells are simple to build and recycle, but they suffer from lower energy density. They have a rather large self-discharge rate and may suffer from leaks. Construction quality, real vs. announced capacity and cycle durability vary a lot between manufacturers; counterfeit batteries are also an issue. NiMH does not pose the same risk of catastrophic failure as Li-ion or Li-Po.

Old nickel-based designs (nickel-cadmium) are obsolete but may still appear frequently in recycling operations because of stockpiling-induced inertia. Cadmium is an extremely toxic heavy metal. Processes should pay special attention to regulatory information, for instance on power tool packs, where the use of this chemistry is usually specified. A neural network / OCR based method should scan all battery pack text (down to individual cell markings) for hints on the battery chemistry used and other parameters, in order to isolate batteries containing Hg, Cd or Pb. Vision-based technologies should also be put to use, as some cells do not have any markings.
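Downstream of the OCR step, the chemistry-isolation logic could be sketched as a simple keyword classifier. The marking patterns and chemistry names below are illustrative assumptions, not an industrial ruleset; a production system would learn these from labelled scans.

```python
import re

# Hypothetical keyword table: text markings that hint at a battery chemistry.
# Chemistries containing Hg, Cd or Pb must be isolated from the main stream.
CHEMISTRY_HINTS = {
    "NiCd":   [r"ni-?cd", r"nickel[\s-]*cadmium"],
    "NiMH":   [r"ni-?mh", r"metal[\s-]*hydride"],
    "Li-ion": [r"li-?ion", r"lithium[\s-]*ion", r"18650", r"21700"],
    "Pb":     [r"\blead\b", r"\bpb\b", r"vrla"],
    "HgO":    [r"mercur", r"\bhg\b"],
}

HAZARDOUS = {"NiCd", "Pb", "HgO"}   # flagged content: Cd, Pb, Hg

def classify_cell_text(ocr_text):
    """Return (chemistry, must_isolate) guessed from OCR'd cell markings.
    Cells with no recognizable marking go to quarantine by default."""
    text = ocr_text.lower()
    for chemistry, patterns in CHEMISTRY_HINTS.items():
        if any(re.search(p, text) for p in patterns):
            return chemistry, chemistry in HAZARDOUS
    return None, True  # no hint: treat as hazardous by default
```

Unknown cells defaulting to the hazardous stream is a deliberate safety choice, matching the text's point that unmarked cells need vision-based handling.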

Lithium based batteries : compendium of chemical components and elements.

Prevalence : Ubiquitous with several subtypes for anode and cathode materials.

Anode : graphite, LiC6 in the lithiated state.

Cathode : LiCoO2 or LiFePO4, LiMn2O4 spinel, or Li2MnO3 with Al2O3 coatings.

Electrolytes : Organic Solvents plus LiPF6 or LiBF4 or LiClO4

Collectors : Al, Cu, Ni, Ti

Protective and insulating layers, and additives : polypropylene (PP), polyethylene (PE), polyvinylidene fluoride (PVDF), ethylene carbonate, propylene carbonate.

Physical Design :

To provide a larger surface for the electrochemical reaction, multilayer or rolled-layer topologies make up the bulk of the lithium battery. A full dismantling operation, known as the “direct method” (a teardown to individual components to recover individual chemical compounds), is impractical with current technology and a nascent area of research, as there is a lack of viable automated methods.

Indirect methods, such as hydrometallurgy, shred the whole passivated battery pack (from structural elements down to individual cells) and then separate plastic from copper/aluminium. The liquid phase contains the elements Li, Mn, Co and C. Filter presses then concentrate that particulate into a solid form known as “black mass”. Formulating new battery cells from this black mass requires chemical processing.

Example of an indirect method processing plant using passivation, Hydrometallurgy and Shredding.

The main problems are the number and diversity of layers comprising the LIB, the various arrangements and geometries of LIBs, and the risk of uncontrolled reaction in case of anode-to-cathode contact on a non-passivated LIB. Any conductive dismantling tool risks shorting anode to cathode, and non-conductive tools still create a risk of pinching anode to cathode, or of creating tears in the insulating dielectric layers. Our current assessment is that direct recovery methods are not economically viable at present, due to a large gap in processing time per battery cell compared to pyrometallurgical and hydrometallurgical methods, including black mass processing.

However, examining in detail how it could be done could make the field of direct recovery a viable option in the medium term.

To gain knowledge into such a direct recovery process, one should be familiar with the most common industrial methods of LIB production, and analyze the process in reverse (from final product to crude source material).

The battery pack

The most common form factors of larger LIB units, known as battery packs, comprise dozens to hundreds of smaller LIB canister or brick cells in a series/parallel arrangement. Larger LIB units also provide structural protection for the cells inside, protection circuitry / BMS boards, and cell interconnects.

  1. Brick or Module Form Factor:
    • Automotive Industry: In electric vehicles (EVs) and hybrid electric vehicles (HEVs), battery packs are often organized into brick or module form factors*. These rectangular or block-shaped modules contain multiple canister-type cells and can be stacked together to form the complete battery pack. This form factor allows for easy scalability and assembly while optimizing the use of available space within the vehicle chassis.
  2. Prismatic Form Factor:
    • Consumer Electronics: Some larger consumer electronic devices, such as laptops and digital cameras, use prismatic battery packs. These packs contain prismatic (rectangular or square) canister-type cells that are integrated into a single housing. The prismatic form factor allows for a compact and uniform shape, making it suitable for thin and sleek devices.
  3. Cylindrical Array Form Factor:
    • Power Tools and Portable Equipment: Industries that require portable power tools and equipment often use cylindrical arrays of canister-type cells. These arrays are designed to fit within the tool’s handle or housing and are configured to provide the necessary voltage and capacity for the device. Cylindrical arrays are also used in emergency backup power systems.
  4. Rack-Mounted Form Factor:
    • Telecommunications: The telecommunications industry frequently uses rack-mounted battery systems, where canister-type cells are arranged in a standard rack-mountable enclosure. These battery systems provide backup power for telecommunication infrastructure, data centers, and other critical applications. The cells are often connected in series and parallel to meet specific voltage and capacity requirements.
  5. Containerized Form Factor:
    • Utility-Scale Energy Storage: In utility-scale energy storage applications, such as grid stabilization and renewable energy integration, canister-type cells are often arranged in large, containerized battery systems**. These systems use standard shipping containers to house and organize the cells, making them easy to transport and install at power plants and substations.
  6. Custom Form Factors:
    • Aerospace, Defense, and Specialty Applications: Some industries, such as aerospace and defense, may require custom form factors for battery packs to meet specific performance and space constraints. These custom packs are designed to fit the unique requirements of aircraft, spacecraft, military equipment, and other specialized applications.

Note : The largest mobile form factors, typical in the EV industry, may contain cells arranged in modules (“supercells”). Together, these module arrangements form the whole battery pack.
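The series/parallel arithmetic behind these pack form factors can be sketched quickly: cells in series add voltage, parallel strings add capacity. The 12s4p figures and cell ratings below are illustrative assumptions, not data for any specific pack.

```python
# Minimal sketch: electrical parameters of a series/parallel cell arrangement.
# An "SsPp" pack puts S cells in series and P such strings in parallel.

def pack_parameters(series, parallel, cell_voltage, cell_capacity_ah):
    """Nominal voltage (V), capacity (Ah) and energy (Wh) of an SsPp pack."""
    pack_voltage = series * cell_voltage          # series cells add voltage
    pack_capacity = parallel * cell_capacity_ah   # parallel strings add capacity
    energy_wh = pack_voltage * pack_capacity      # nominal stored energy
    return pack_voltage, pack_capacity, energy_wh

# Example: a hypothetical EV module of 12s4p cells (3.6 V, 5 Ah each)
v, ah, wh = pack_parameters(12, 4, 3.6, 5.0)   # 43.2 V, 20 Ah, 864 Wh
```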

Containerized form factor banks are the largest form factor and are used as energy buffers in industry or in renewable energy production operations. Those are typically maintained on-site. Recycling of such large arrays is typically done on decommissioning (e.g. end of life due to an excess number of cycles). Whether the decommissioning and dismantling operations are performed on the customer site or at the recycling center depends on the roadworthiness of the containerized array, as well as local regulations.

Any LIB cell direct recycling process first needs to devise an access method to the LIB cells for the most common form factors (1, 2 and 3 above) as a priority. It is challenging because most designs use gaskets, gluing and plastic welding to form hermetic seals. Their purpose is to prevent humidity and water ingress and to reinforce battery pack safety. The downside is that these packages are less dismantling-friendly.

Automotive battery packs are still a maturing field, with potential packaging evolutions, fast chemical technology evolutions and large technological variance between manufacturers. Form factor standardization will allow integrated management of EV batteries by single facilities; for now, the burden of reconditioning packs is mainly tackled by the EV manufacturer or a specific contractor, but it will increasingly be the task of the auto mechanic workshop. It should be known that EV battery packs contain a lot of circuitry and BMS components, as well as structural components. Most of the time, such a pack will be subjected to repair and re-use, with BMS component repairs and diagnosis/replacement of weak cells or super-cells (modules) to rejuvenate the battery. Only under end-of-life circumstances (most cells underperforming) or structural damage will the pack be gutted.

This is an assessment of ease of access to, and ease of removal of, LIB cells for each industry field :

1 – Automotive : Variable, moderate to hard access and removal, as there is a tradeoff between maintenance and rejuvenation requirements, which warrant easy cell parameter measurement and replacement, and, on the other hand, safety requirements for batteries in crash situations, which have had manufacturers enclose the cells in protective structural materials (ex: Tesla 4680 battery foam). There are also packs with a very low ratio of individual cell volume to battery pack volume (several hundred cells per pack).

2 – IT, Computers / Laptops : Hard, usually sealed to prevent tampering and for humidity/water ingress protection; the number of cells per pack is low (usually six cells, to get into the 20V range). The pack contains a small PCB called the BMS that performs battery health checks and blocks charging or discharging in certain circumstances. Access to the cells without damaging the plastic enclosure usually requires prying it open with a cutter or screwdriver; the enclosure is made of hard plastic, probably ABS. This process is dangerous, as it may damage the cells inside. Destructive but fast processes to gain access to the cell array, such as crushing or cutting the enclosure, are even more dangerous, as there is very little spacing between the cells and the walls of the plastic enclosure. The following video shows the manual, non-destructive teardown process.

https://www.youtube.com/watch?v=ZnA1zXRxz1M

Note, however, that refurbishment and repair techniques exist to re-charge over-discharged batteries while bypassing the BMS (which may lock charging), by applying current through small holes drilled laterally on each side of the enclosure, thus accessing the pack terminals (it helps if the cell spatial configuration is known, which is trivial in single-line packs). This method is dangerous, however, and voids all warranties, and the pack should be resealed properly after the operation. Charging that way should be done under close supervision. https://www.youtube.com/watch?v=I1hjufLz8Fo

If non-destructive access to the cells is achieved and the cells have to be swapped (rather than purchasing a whole new battery pack, for whatever reason), BMS intervention may be required, such as a state reset.

3 – Power tool battery packs : Variable; easy for screw-based designs, hard or impossible for sealed designs. The number of cells per pack is low, usually enough to get into the 12V to 20V range. These battery enclosures, made of sturdy ABS plastic, may prove to have economic value and be reused as-is in screwed designs, if found with only minor aesthetic damage, to create new battery packs. Repair is usually straightforward for screwed designs. Note that a protection fuse (overcurrent and/or thermal) may be present inside the pack, and refurbishment operations should take the health of that critical component into account, never bypassing it. Recent designs may use BMS circuitry, which may or may not support a battery cell refurbishment (a BMS state reset may also have to be performed on a full cell swap).

4 – Telco, IT or residential battery packs, for UPS or renewable energy operations, usually LiFePO4-based : Easy access to the terminals, medium difficulty for cell removal. Full dismantling time is average, as the battery cells are large and screwed cell-to-cell using copper busbars (high ratio of cell volume to battery volume). These packs contain additional high-value circuitry such as a BMS, temperature sensors, copper fuses and busbars, and overcurrent protection devices, all mounted on a 19″ metal chassis. Most of the time such a device will be refurbished, after identification of the root cause of the failure or the decrease in performance. Dismantling will be done if the device does not follow regulatory requirements, or if the defect is in a part that cannot be repaired or replaced, such as heavy BMS damage with no manufacturer replacement parts available.

5 – Gadgets and small electronics : vapes, DECT phones, LED torches, specific handheld audio/video equipment, small medical consumer devices, etc. Access to the cell is usually hard due to the large diversity of devices, tamper protection and planned obsolescence schemes, and is usually not economically rewarding. The switch from an accessible alkaline, NiMH or Li (AA, AAA) battery compartment to a sealed device with a USB charging converter is a worrisome development for the sustainable use of a device, mainly due to the following factors : micro-USB charging port failure (a frequent occurrence) and charging step-down converter failure, also frequent. Such devices are heavily cost-optimized, and the charging portion is reduced to the most basic expression that guarantees safety, and not always even that. And finally comes LIB cell failure. Replacing the cell, usually in pouch form, requires the purchase of a cell with the same configuration, same form factor and same terminal connector. In a recovery operation, these devices would typically be subjected to indirect methods (sent to the shredder, into the LIB process rather than the non-LIB e-waste process), because of LIB safety risks and the inconvenience of LIB removal.

The danger of ‘hidden cells’

Finally, we have the particular case of large devices employing relatively small cells that may not be detected during a screening process. These cells are mostly added to equipment to perform operations in the absence of mains power. Common ‘hidden cells’, to name a few, are RTC (real-time clock) batteries using CR2032 cells, and BBUs (battery backup units) using canister cells, used to preserve the memory state of disk array caches in servers or storage arrays. These cells may find their way into processes that have nothing to do with batteries and pose risks to life and machinery.

High level refurbishment

It may make sense, for high-value equipment presenting with a single or limited number of defective cells inside, to replace those cells, perform a device safety check-up, and resell the equipment or return it to the manufacturer for further action. However, this requires :

-Access to both terminals of each cell while they are inside the device, to check voltage and make discharge/charge tests.

-Adequate time resources, proportional to equipment value (e.g. a per-device time budget counting down to 0).

-Adequate troubleshooting and repair skills, safety and quality culture.

-Access to an equivalent replacement cell or super-cell (same form factor, capacity, terminal types, chemistry). This in turn requires precise management of in-stock items and adequate warehousing.

-When creating cell arrays from scratch using canister cells, or when replacing defective cells, note that the terminal strips linking the cells together typically use spot welding techniques; a spot welder is therefore required on the work bench.

LIB Cell / Battery pack health evaluation

Whether for high-level refurbishment of large battery packs or for salvaging decently healthy individual LIB cells, fast and accurate measurement methods and charge/discharge monitoring methods are needed to perform this evaluation directly. There is also the possibility of indirect evaluation through BMS inquiry, over a serial bus such as I2C or SPI, or through a display interface for large packs, which may indicate which cells are good and which are defective. We will first delve into the most commonly used direct measurement methods.

Direct Measurement Methods

Open circuit voltage : A fast SoC evaluation method requiring access to both positive and negative terminals of the cell. Access may prove difficult in some assemblies, or require specific instrument probes (not the standard multimeter “spike” probe). Measurements need to be temperature-compensated and should be done once the devices are at thermal equilibrium inside the facility. A very low voltage reading indicates an overly discharged cell that is probably unhealthy or has severely reduced capacity. A negligible voltage, close to 0, indicates electrode or terminal damage. The issue with voltage readings in the nominal range is that they give a broad indication of state of charge (SoC), not state of health (SoH), SoH being the ratio of degraded capacity to nominal (manufacturer) capacity. Advantages of the method : it is fast, does not require specific equipment, and allows the elimination of severely damaged or discharged cells from further evaluation.
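The OCV triage just described can be sketched as a small decision function. The voltage thresholds below are assumptions for a generic 3.6 V Li-ion cell; a real line would tune them per chemistry and apply the temperature compensation mentioned above.

```python
# Triage sketch for the OCV screening step: eliminate damaged or
# over-discharged cells before the more expensive SoH evaluation.
# Thresholds are illustrative assumptions for a generic Li-ion cell.

def triage_by_ocv(ocv_volts):
    """Coarse classification of a Li-ion cell from its open-circuit voltage."""
    if ocv_volts < 0.5:
        return "reject: terminal or electrode damage"      # near-zero reading
    if ocv_volts < 2.5:
        return "reject: over-discharged, likely unhealthy"
    if ocv_volts <= 4.25:
        return "pass to SoH evaluation"                    # nominal range
    return "reject: overcharged or wrong chemistry guess"
```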

DC resistance measurement : This method provides additional data to help in the evaluation of SoH. It requires, however, precision bridges (milliohm range) and is preferable for high-capacity LIB cells (Ah range?). The cell is discharged for a short time into a calibrated load. The method is straightforward, applying Kirchhoff's voltage law to the OCV, the current, and the voltage under load. However, performing this measurement on cells of various SoC is poor methodology, as DC resistance is not entirely independent of SoC, unless a compensation model accounting for SoC is used. <It is preferable to perform that measurement after a full battery charge and after cooldown -check> This may, however, give a SoH evaluation after a single charge, and spare the battery from a three-phase “charge / discharge / charge” cycle.

Full cycle measurement : This method charges the battery, discharges it, and charges it again to 50%. It performs evaluation using methods such as Coulomb counting, discharge current metering under various load conditions, temperature profiling, and repeated DC resistance measurement during both discharge and charge. It gives the best evaluation of SoH, but is expensive in terms of energy and time. Finally, one should remember that once the refurbishment process is done, the cell, battery pack or device containing battery packs could be stored in a warehouse for an extended period. Industry standards advise storing LIBs at between 40% and 50% SoC. One should then take into account self-discharge time constants from chemical processes, at 1% to 2% per month, and those coming from possible standby power draw (if the battery comes with supervisory components such as a BMS, expect self-discharge of up to 3% per month).
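The Coulomb-counting step of the full cycle can be sketched as a discrete integral of the measured discharge current, with SoH as the ratio of delivered to nominal capacity. The constant 2 A / 1.5 h sample stream is an illustrative assumption.

```python
# Coulomb-counting sketch: integrate discharge current samples over time to
# get delivered capacity, then compare to nominal capacity for SoH.

def coulomb_count_ah(currents_a, dt_s):
    """Delivered charge in Ah from current samples taken every dt_s seconds."""
    return sum(currents_a) * dt_s / 3600.0

def estimate_soh(delivered_ah, nominal_ah):
    """SoH = measured full-discharge capacity / manufacturer capacity."""
    return delivered_ah / nominal_ah

# Example: constant 2 A discharge lasting 5400 s on a 3.0 Ah-rated cell,
# sampled once per second -> 3.0 Ah delivered, SoH = 1.0 for a healthy cell
samples = [2.0] * 5400
soh = estimate_soh(coulomb_count_ah(samples, 1.0), 3.0)
```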

A reduced SoC for warehousing also increases storage safety, by reducing ignition and fire risks.
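The storage figures quoted above (40% to 50% SoC at warehousing, 1% to 2% per month chemical self-discharge, up to 3% per month with a BMS attached) can be projected with a quick sketch; monthly compounding of the loss is an assumption of the sketch, not a stated industry model.

```python
# Storage projection: remaining SoC after warehousing, assuming the monthly
# self-discharge rate compounds month over month.

def soc_after_storage(initial_soc, monthly_loss, months):
    """Remaining SoC (0..1) after `months` in storage."""
    return initial_soc * (1.0 - monthly_loss) ** months

# A pack stored at the recommended 50% SoC with a BMS drawing standby
# power (3%/month) retains roughly 35% SoC after one year.
remaining = soc_after_storage(0.50, 0.03, 12)
```

Such a projection is useful for scheduling top-up charges so warehoused packs never drift into the over-discharged region flagged by the OCV triage.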

It follows that the main challenge of SoH determination is to perform an evaluation during either a biphasic “discharge / charge” cycle or, better in terms of energy and time expended, during a single “charge up to 50%” step. If the cell already presents with a high OCV, as usually expected for SoC over 50%, it would probably require a limited-amplitude biphasic cycle (discharge/charge) to perform an adequate SoH evaluation.

Another issue with SoH determination is access to nominal capacity data, the main datum required for SoH estimation. If the nominal capacity is not clearly marked on the cell, it may require tracing the cell manufacturer from labels and marks to find a datasheet that gives hints on nominal capacity, and even the marking is not always trustworthy information, due to manufacturer over-reporting.

What may be known, however, are the upper chemical constraints on capacity for a given volume, weight and battery chemistry, as well as capacity data from reputable manufacturers; these should be used in the absence of data. Cell density may also play a role in nominal capacity estimation: if the manufacturer adds inert fillers or empty spaces, cell weight will be lower, as will density for a fixed form factor. This metric should be taken into account.
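One way to use these chemical upper bounds is a plausibility check on the marked capacity: a claimed capacity implying an impossible specific energy for the cell's weight flags a counterfeit or over-reported marking. The 300 Wh/kg ceiling and 3.7 V nominal voltage below are loose assumptions for commodity Li-ion cells, not industry constants.

```python
# Plausibility sketch: reject capacity claims that exceed an assumed
# specific-energy ceiling for the chemistry and measured cell weight.

MAX_WH_PER_KG = 300.0    # assumed loose ceiling for commodity Li-ion
NOMINAL_V = 3.7          # typical Li-ion nominal cell voltage

def capacity_is_plausible(claimed_mah, cell_weight_g):
    """False when the marking implies an impossible specific energy."""
    energy_wh = (claimed_mah / 1000.0) * NOMINAL_V
    specific_wh_per_kg = energy_wh / (cell_weight_g / 1000.0)
    return specific_wh_per_kg <= MAX_WH_PER_KG

# A "9800 mAh" 18650 weighing 45 g would imply ~800 Wh/kg: implausible.
# A 3400 mAh cell at 47 g implies ~268 Wh/kg: plausible.
```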

Finally, manufacturer process variability and unclear chemical technology labelling may affect SoH determination algorithms. Some LIB cells are optimized for large currents, such as those used for drones.

As in the IC market, proper labelling of cells, in particular with high-data-density methods such as QR codes, will help automated processing by linking to the datasheet and the nominal data therein.

Finally, there is the issue of cell counterfeiting.

Currently, the accepted threshold in the LIB industry for classifying a cell as “dead” is a SoH of less than 0.8. Performing more cycles on a battery with a SoH below 0.8 may lead to an abrupt decrease in capacity. https://www.hindawi.com/journals/ijer/2023/4297545/

Direct evaluation of large sets of cells in an industrial setting is nevertheless a powerful way to build a data-driven system of cell evaluation, and brings insight into quality and safety trends of the LIB industry.

Indirect Measurement

Indirect measurement methods use the registered health information from the BMS (battery management system). Depending on the BMS, however, the granularity of the SoH assessment may not go down to the individual cell. In any case, a granular assessment of individual cells provides the fastest method for SoH down to the cell level. Access to the BMS data is usually done through one of two methods :

Interfacing through probing : an analyzer probe connects to the BMS, usually through the standard I2C or SPI protocols. The software should then detect the BMS IC technology to properly negotiate the high-level protocol and retrieve meaningful data; this BMS variability is one issue.

Human device interface querying : the device provides an interface, such as a keypad and a screen, that allows access to cell health data.

It should be noted, however, that not all BMS provide a high-level assessment of SoH; some may simply provide voltage and temperature data for individual cells. Some BMS may provide historised data. At a minimum, a BMS should provide the total number of cycles and the date of manufacture or replacement of the battery bank.

The BMS (Battery management system)

The BMS is such an important topic in the field of battery reconditioning and troubleshooting that it should be examined in more detail.

What are the functions of a BMS ?

  1. Cell Monitoring and Balancing:
    • Voltage Monitoring: The BMS continuously monitors the voltage of each individual cell within the battery pack. This helps ensure that cells remain within a safe voltage range during charging and discharging.
    • Cell Balancing: In multi-cell battery packs, cells can have slight variations in capacity and voltage. The BMS can perform cell balancing by redistributing charge among cells to ensure that they have similar state-of-charge (SoC). This enhances the overall pack capacity and prolongs its life.
  2. Overcharge and Overdischarge Protection:
    • Overcharge Protection: The BMS prevents individual cells or the entire battery pack from being overcharged, which can lead to cell damage, reduced capacity, and safety hazards.
    • Overdischarge Protection: It also prevents overdischarging, ensuring that cells do not drop below a certain voltage threshold. Overdischarging can damage cells and lead to capacity loss.
  3. Temperature Monitoring and Control:
    • Temperature Sensing: The BMS monitors the temperature of the battery cells during charging and discharging. Excessive heat can be a sign of internal problems or thermal runaway. The BMS can take corrective actions if the temperature rises to unsafe levels.
    • Thermal Management: Some BMS systems control thermal management components like fans, heaters, or cooling systems to maintain the battery within the optimal temperature range.
  4. State-of-Charge (SoC) Estimation:
    • The BMS estimates the SoC of the battery pack based on voltage, current, and temperature data. Accurate SoC estimation is crucial for providing reliable battery status information to users.
  5. Cell Voltage and Capacity Reporting:
    • The BMS provides information on the voltage, capacity, and health of individual cells and the overall battery pack. This data helps users and system controllers make informed decisions.
  6. Short Circuit Protection:
    • In the event of a short circuit within the battery pack or external circuitry, the BMS can disconnect the battery pack to prevent excessive current flow and overheating.
  7. Communication and Data Logging:
    • Many BMS systems include communication interfaces (e.g., CAN, RS-485, or SMBus) to relay data to external devices, controllers, or user interfaces. This allows for real-time monitoring and data logging.
    • Data logging is essential for tracking battery performance over time, identifying trends, and detecting anomalies.
  8. Fault Detection and Alarms:
    • The BMS detects faults, anomalies, and safety-critical events. It can trigger alarms or safety measures, such as disconnecting the battery pack, to prevent or mitigate potential issues.
  9. Control of Charging and Discharging:
    • The BMS can control the charging and discharging processes to optimize performance and extend battery life. This may include regulating charging currents, charge termination criteria, and discharge limits.
  10. Safety Features:
    • BMS systems often include additional safety features, such as overcurrent protection, fault tolerance, and redundant circuitry, to enhance the overall safety of the battery pack.
  11. User Interface: In some applications, BMS systems provide a user interface or display for monitoring battery status, configuring settings, and displaying alerts or warnings.
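Passive cell balancing (function 1 above) can be illustrated with a toy simulation: the BMS bleeds charge from the highest cells through resistors until every cell sits within a tolerance band of the weakest one. The SoC values, bleed step and tolerance are arbitrary illustration figures, and real balancers work on voltage, not SoC directly.

```python
# Toy simulation of passive balancing: high cells are bled down toward the
# minimum-SoC cell; the bled energy is dissipated as heat in bleed resistors.

def passive_balance(socs, bleed_step=0.001, tolerance=0.005):
    """Return a new SoC list where every cell is within `tolerance`
    of the weakest cell's SoC."""
    socs = list(socs)
    target = min(socs)
    for i in range(len(socs)):
        while socs[i] > target + tolerance:
            socs[i] -= bleed_step     # one bleed increment per loop step
    return socs

# Four-cell string with a 7% spread; after balancing, all cells sit
# within 0.5% SoC of the weakest (0.79) cell.
cells = passive_balance([0.82, 0.80, 0.86, 0.79])
```

Active balancers instead shuttle charge between cells (capacitive or inductive transfer), trading circuit complexity for less wasted energy; the convergence criterion is the same.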

It should be known that charging the cell array is typically not a duty of the BMS, but of the charger. The charger may adapt its charging profile depending on BMS data, and the BMS may interrupt charging, by disconnecting the battery from the charger, if it detects an unsafe charging operation, such as overcharging or excess charging current during the bulk phase. This is because the BMS is primarily logic, supervision and low-to-medium-current circuitry for cell balancing, whereas the charging circuitry may have to deal with high currents, and so requires larger switching MOSFETs, capacitors and inductors. These are usually outside the battery pack in the case of sealed devices.

BMS form factors.

With the exception of systems with specific needs, system integrators will usually implement ready-to-use BMS modules that constrain cell array voltage and/or the number of individual cells monitored or subject to cell balancing. Some BMS are “multi-chemistry” and can tailor their functions to the various variants of LIB technology, such as LiFePO4 or Li-Po, as well as Nickel or SLA batteries. Developing a BMS from scratch is a design-intensive process.

The question then is: is recovering a BMS module in working order worth it, when the cells themselves are mostly in a bad SoH? Such an operation would require opening the battery case, which can be troublesome for sealed designs.

BMS price tags vary with the industry, and power tool, e-bike, drone BMS and laptop BMS are usually different modules.

A quick search on Chinese vendor websites gives price tags for new BMS for high-current applications in various series configurations, from 2S up to 5S, ranging from 0.5 EUR/unit up to 2 EUR/unit.

Laptop BMS usually have an elongated “strip-like” form factor, and are manufacturer, or even device, specific. Since these BMS may use different ICs, the high-level supervisory protocol may differ from laptop manufacturer to laptop manufacturer, and even from device to device. Such parts are usually harder to source, as their supply is restricted to manufacturers or subcontractors in the battery pack making business. These BMS may show better economic viability in recovery operations.

How do these BMS systems interface with the device they power under normal operation, how can they be queried with analyzers, and under which low-level protocols?

Commonly used industry-standard protocols are I2C and SMBus. I2C is a low-level IC-to-IC communication protocol, not specific to power systems, while SMBus is a standardized protocol for power applications built on top of I2C, frequently used in the computer field. Manufacturers may use proprietary protocols or protocols that deviate from the standards.
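To make this concrete, here is a minimal sketch of decoding the 16-bit words a Smart Battery System (SBS) compliant pack returns over SMBus. The register numbers and units follow the Smart Battery Data Specification; the raw words would typically be read with, e.g., the smbus2 package, as shown in the comment. The example raw values are made up for illustration.

```python
# Minimal decoding helpers for Smart Battery System (SBS) registers, per the
# Smart Battery Data Specification. The raw 16-bit words would typically be
# read over SMBus, e.g. with smbus2: SMBus(1).read_word_data(0x0B, reg),
# where 0x0B is the standard SBS battery address.

def decode_voltage_mv(word):
    """Voltage() (reg 0x09): unsigned 16-bit, in mV."""
    return word & 0xFFFF

def decode_current_ma(word):
    """Current() (reg 0x0A): signed two's complement, in mA.
    Negative means the pack is discharging."""
    word &= 0xFFFF
    return word - 0x10000 if word & 0x8000 else word

def decode_temperature_c(word):
    """Temperature() (reg 0x08): unsigned, in units of 0.1 K."""
    return word / 10.0 - 273.15

# Example raw words, as they might come back from a laptop pack BMS:
print(decode_voltage_mv(0x2E7C))   # 11900 (mV)
print(decode_current_ma(0xF830))   # -2000 (mA, discharging)
```

The sign handling of Current() is the usual pitfall: a naive unsigned read of a discharging pack shows an implausibly large positive current.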

This makes querying the BMS a potentially challenging task.

Some BMS designs used in other industries are “dumb” and do not expose a communication interface; they are mostly restricted to safeguarding the cell array and promoting a longer life through cell balancing. Such a BMS could simply interrupt charge, or discharge, in non-nominal conditions.

Monitoring the OCV of the pack through its external terminals could show 0V simply because the BMS disconnects the battery pack from the outside terminals using solid state components, such as MOSFETs. Having interfaceable BMS in most types of battery packs would be beneficial for a proper assessment of SoH.

Case study, laptop battery BMS access: a common example of probed BMS access is laptop/drone batteries.

The electronics repair industry has made progress in developing tools that allow out-of-device BMS querying. These efforts should be pursued further into BMS query automation.

Conclusion about cell extraction

We can already see that access to and removal of the basic component of a battery pack, the LIB cell, is a daunting task, due to the variety of device complexity and LIB attachment methods, each requiring specific dismantling methods. This variety makes it hardly automatable at the present time.

Manual cell extraction would be an intermediate-skill task requiring a good safety culture, as dismantling could be done on banks still holding a non-negligible amount of charge. Such cells should be discharged just after extraction, and shorts should be avoided in subsequent cell storage by covering the electrodes with insulation.

There may be an advantage in direct process approaches and LIB cell recovery, as in some defective battery packs the cause of failure may be limited to a single defective cell (most prevalent in series connections), or to the failure of the BMS. In that case, it may be sound to perform basic cell integrity/health checks, that is, checking that the cell does not present leaks or an abnormal shape, plus checking terminal voltage.

A positive assessment would reroute the cell so more checks can be done, before an eventual proper discharging/charging cycle to assess the percentage of nominal capacity, and then towards the refurbished cell market. More research should be done, particularly on the potential risk of an increase in incidents, such as uncontrolled fires or explosions, that such circular re-use may entail.

The LIB cell

The basic unit component of a LIB is the LIB cell. LIB cells can be planar (superposed layer stacks, sealed in a brick or pouch form) or cylindrical (canister type), where the superposed layer stacks are rolled into a cylinder shape and sealed into a metal tube or hermetic plastic insulation, with electrodes at opposite ends of the cylinder. Sometimes a small circular PCB BMS is added on one electrode to perform individual cell protection tasks.

Compendium of the most common standardized LIB cells of the canister type :

  1. 18650: The 18650 cell is one of the most widely used and recognizable cylindrical Li-ion cells. It has a diameter of approximately 18 mm (0.71 inches) and a length of approximately 65 mm (2.56 inches). 18650 cells are commonly used in laptops, flashlights, and many other portable electronic devices.
  2. 21700: The 21700 cell is a larger cylindrical Li-ion cell with a diameter of approximately 21 mm (0.83 inches) and a length of approximately 70 mm (2.76 inches). It offers higher capacity and power output compared to the 18650 cell. 21700 cells are used in electric vehicles (EVs), energy storage systems, and high-performance applications.
  3. 26650: The 26650 cell is larger than both the 18650 and 21700 cells, with a diameter of approximately 26 mm (1.02 inches) and a length of approximately 65 mm (2.56 inches). These cells provide higher capacity and are often used in high-drain applications, such as power tools and some large flashlights.
  4. 14500: The 14500 cell is smaller than the 18650 and has a diameter of approximately 14 mm (0.55 inches) and a length of approximately 50 mm (1.97 inches). It is commonly used in small electronic devices, including some flashlights and consumer electronics.
  5. 16340 (also known as CR123A): The 16340 cell, also known as CR123A, has a diameter of approximately 16 mm (0.63 inches) and a length of approximately 34 mm (1.34 inches). These cells are used in various applications, including digital cameras, flashlights, and security devices.
  6. AA and AAA Form Factors (14500 and 10440): Some cylindrical lithium-based cells are designed to match the dimensions of standard AA (14500) and AAA (10440) alkaline batteries. These cells are often used in devices that traditionally use AA or AAA batteries but benefit from the higher energy density and longer lifespan of lithium-based cells.
  7. Sub-C Form Factor: Sub-C cells are larger than typical consumer cells and have a diameter of approximately 22 mm (0.87 inches) and a length of approximately 42 mm (1.65 inches). They are commonly used in high-power applications, including power tools and certain cordless appliances.

These are some of the standard form factors for cylindrical lithium-based battery cells. Each form factor has its unique advantages and applications, and manufacturers produce a wide range of cell capacities and chemistries within these form factors to meet the demands of various industries and devices.
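The numeric designations above follow a common industry convention: the first two digits give the diameter in mm, the next two the length in mm, and the trailing "0" denotes a cylindrical form factor. A small helper can recover cell dimensions from the code; the parsing rule is the conventional reading of these designations, not an official parser.

```python
# Decode a 5-digit cylindrical cell designation: first two digits = diameter
# in mm, next two = length in mm, trailing "0" = cylindrical form factor.
# This is the common industry naming convention (e.g. 18650 -> 18 mm x 65 mm).

def cell_dimensions(designation):
    """Return (diameter_mm, length_mm) for a code like '18650'."""
    if len(designation) != 5 or not designation.isdigit() or designation[-1] != "0":
        raise ValueError("expected a 5-digit cylindrical cell code like '18650'")
    return int(designation[:2]), int(designation[2:4])

for code in ("18650", "21700", "26650", "14500", "10440"):
    dia, length = cell_dimensions(code)
    print(f"{code}: {dia} mm x {length} mm")
```

Note that codes such as CR123A (the 16340) do not follow this scheme directly and need a lookup table instead.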

The 18650 canister cell is one of the most ubiquitous form factors for canister cells. It should be known, however, that there are several terminal finishes: leads, flat heads, and a raised button on the anode. Any low-level refurbishing operation would have to deal with these cell variations.

Pouch or brick form factor cells pose more challenges due to lower standardization pressure, as these cells are often tailored for high-density electronics such as smartphones, small gadgets, or any device that has a thickness constraint or cannot accommodate the above canister-type cells. Moreover, these may have specific electrode layering configurations, such that their open circuit voltage is more than the 3.4V cell voltage. It is probable that these form factors will resist direct recycling methods for longer than canister-based cells.

An example of a manual teardown process is shown here, which helps to visualize the challenges of transposing this to an automated process :

Manual teardown of a 18650 Cell to its constituents.

Challenge for this step : Separate treatment of planar cells and canister cells. Feeding strategy (easier for cylinder cells due to geometry conformity and rolling behaviour in a feeder).

Considering first the canister-type 18650 standard battery: the outermost layer of a canister-type battery is a plastic film for insulation and for carrying cell nominal information, manufacturer references, branding, etc. This plastic layer is removed by running a cutting tool or laser axially, to reveal the bare canister.

The canister has to be opened. The canister is sealed at each end to form the positive and negative leads of the lithium cell. Note that the anode is usually a raised-button electrode plug with a concentric insulator that seals the canister while providing electrical insulation between the cathode body (the whole canister) and the anode raised button.

Opening of the canister: the cathode end is cut radially with a ceramic guillotine-like blade; the anode end is cut radially in the same way. At this step, the canister is open on both ends.

Separation of the electrode layers from the canister: it would be preferable to cleanly separate the canister from the rolled electrode assembly layers. One method would be to drive a plunger into the canister to push the roll into a separate bin or, better, onto a conveyor (to prevent inter-cell shorts).

The empty canisters would be recovered at the end of this line.

Unrolling of the cell layers

This process requires precision, as it requires pinning the roll tab made of separator material, which does not adhere strongly to the roll and helps in unwinding. Once this end is pinned, a delicate radial travel of the roll would perform the unrolling operation using a mechanical roller. There may also be chirality issues, with rolls going in the CW or CCW direction. Depending on the manufacturer, the plastic tab at the top of the roll (sometimes with a yellow Kapton-like part) may show axial discontinuities (attachment through a limited portion of the roll), which can lead to breaks during the unwinding process. After the tab is secured and unwinding progresses, the core of the battery is revealed: the porous membrane, copper or aluminum electrodes, metal oxides, lithium and graphite. Note that there is possibly a small glued region between the plastic tab and the body, as a little unrolling resistance is seen in the above video at the transition point. The main issues with this step seem to be :

  • the precise force feedback required to perform unrolling.
  • CW/CCW issues for unrolling motion.
  • Springiness and form memory of the jelly roll, which would require pinning the unrolled cell at both ends.
  • Unrolling to the very end without unpinning the end. For that purpose, a double cylindrical carriage may travel to perform the unwinding. When a vertical photodetector sees the table instead of the roll just past the outward cylinder, it means that the roll has reached its end and, hopefully, has been caught by the inward cylinder. This jelly-roll extremity – due to the springiness arising from being rolled tight – will rise above the inward roller cylinder and aid in the following steps.

After unrolling comes the peeling of the layers: membrane layer, cathode (Li oxides) foil, another membrane layer, and anode (Li graphite) foil. Peeling is also a delicate process, rendered difficult by the fact that both roll ends are pinned. Pinning should be done such that there is a small gap between the pinning surface and the roll; that way, subsequent peeling steps will be easier.

Moreover, the layer separation process gives rise to powdery anode and cathode contents, that do not stay bound to the metallic layers.

Possible peeling strategies would include electrostatic/triboelectric separation for the membrane/separator layers (which are plastic polymers in nature), and/or suction devices for the foils.

Given the small thickness (how many µm ?) of the separator membrane and its low weight and porosity, this membrane could be aspirated by the suction process while keeping the foil underneath unperturbed.

For the electrode foil, apprehension using a suction cup could be used, so that it can be placed into cathode and anode bins.

Aspiration or apprehension should be attempted at the 4 corners of the unrolled cell, which is a zone where the layers are mostly already separated (as when one flips a book page).

The portion of lithium metal oxide dust that does not stick to the electrode could be vacuumed.

Direct process : The fire hazard risk

The fire hazard risk is higher in a direct process with granular recovery of components, mainly because passivation happens far down in the recycling process, and there is no passivation at all for re-use or refurbishing operations – on the contrary, total chemical energy increases at the end of the refurbishing process (up to 40–50% SoC).

The eventuality of storing non-passivated cells in high-density stacks for resale as refurbished elements could create energetic “hot spots” where a single cell failure would compromise (start a fire in) the whole stack. The risk mainly comes from: terminal-to-terminal shorts by cells forming a discharge loop, extraneous materials such as strips or cables shorting terminals, and crushing forces from excess total storage stack weight compromising cell integrity. Finally, a controlled HVAC environment is required to guarantee safe temperature ranges for batteries at all times, since excessive temperature excursions damage the battery and increase the risk of fire.

Finally, human operator error may be the cause of ignition, particularly in a mostly manual process such as the reuse and refurbishing processes found in the direct method. The most common causes of ignition would be :

  • Improper handling of cell arrays – shorts between strips closing the circuit. Note that a cell array is often a non-rigid assembly, and shorts are possible due to relative cell movement during handling. Also, array construction is sometimes questionable, with very short clearance between terminals whose connection would induce a short circuit.
  • Improper handling of cell arrays – shorts between the exposed copper of cables connected to the array terminals.
  • Improper terminal insulation prior to storage.

As for non-human-mediated risks, there is spontaneous ignition due to cell damage or improper construction. Since quality control of the input feed is a near-impossible task, that risk cannot be ruled out in recycling operations.

Much research is needed on root cause analysis of LIB fire ignition mechanisms, and on additional proactive measures that can detect cell susceptibility to catastrophic failure (such as X-ray tomography, and neural-network visual evaluation of LIB pouches and canisters with compromised physical integrity), as well as on maintaining a LIB database of problematic batches from manufacturers. This requires a coordinated regulatory effort at the international level.

Research is also needed on mitigating the risk, and on adequate fire extinction systems for lithium-based fires.

Adequate extinguishing agents for multiple chemistry battery fires :

For work benches, a fast-acting class D extinguishing agent such as sand could be discharged onto the work area by a gravity-fed mechanism – a large tank of sand emptying onto the work area, triggered by an emergency button. A release stop should be present to modulate the quantity of sand used to extinguish the fire. Additional mobile class D extinguishers should be present throughout to combat fires in other areas.

The warehouse is the most challenging zone, as it contains the most energy-dense part of the plant. There are several mitigation strategies :

-Large firewalls with thermal insulation barriers plus flame retardants – a lithium fire can propagate through thermal radiation heat transfer alone – so that the fire is restricted to one area or its speed of propagation is hindered.

-Firewalls are impractical on the front side of the shelf, as access to and view of the merchandise would be impossible. Row width should be managed carefully to limit or block row-to-row propagation.

-Automatic class D extinguishing systems can be challenging to install for high shelf scaffoldings.

-The most sound approach is to prevent stockpiling, and to separate the storage area from critical parts of the plant by a thermal firewall plus flame retardants.

Summary of the direct process and final evaluation.

Large plant footprint. Human-labour intensive, with moderately high skills required due to the large variety of tasks (various types of devices encountered, device state assessment, safety culture). Complex logistics. Safety risks such as fire are higher, due to certain processes operating on non-passivated cells, and the risk of propagation. Varied products, with the possibility of recovery and resale of high-value refurbished equipment.

Main final output of direct processes (excluding EV batteries) :

Refined matter :

-Refurbished laptop batteries.

-Refurbished e-bike battery packs.

-Repaired or refurbished 19″ rack batteries, such as LiFePO4

-Viable LIB cells (refurbished, only for largest form factors)

-Rack enclosures 19″, empty, sold as steel scrap or returned to the manufacturer.

-Power Tool ABS battery packs enclosures -empty (when applicable, screw types)

-Power Tool ABS battery packs – refurbished.

-BMS PCB strips

-‘Car-battery-like VRLA’ enclosures fitted for LIB cells, usually LiFePO4; possible refurbishment.

-Cables and busbars, connectors

-Metal strips (linking individual cells)

Constituent matter (direct process down to cell constituents) :

-Copper foil, usually easy to separate from Li Co metal oxides

-Li Co metal oxides in powder form

-Plastic PP,PE porous membranes

-Lithiated graphite (LiC6) – may require additional steps to properly separate from the aluminum foil.

-Aluminum foil.

-Steel or aluminum canisters

-Shredded ABS – from battery enclosures such as power tools and laptops batteries

Conclusion : direct vs hydrometallurgic indirect

Full LIB cell dismantling into constitutive layers is a process that is challenging to automate, and non-profitable in large-scale operations and in developed-country industrial settings, where black mass refinement processes are preferable (the indirect method).

Thus the core of the economic viability assessment of direct vs. indirect methods lies principally in the human-labour-intensive part of direct methods – as of 2023 – versus the process burden of black mass conversion into new cells, given that direct methods are able to perform granular separation of lithiated carbon, aluminium foils, copper foils, lithium/cobalt oxide compounds, and plastic enclosures in a good state of integrity. It should be noted, however, that depending on the wear of a lithium cell, direct methods may not recover pristine compounds (usable without processing in new batteries). Such processes would probably blend materials recovered from direct methods into new batches to guarantee battery performance.

Indirect process with repair & refurbish emphasis

A refined indirect method should be explored as a possible process refinement. In that case, the indirect process would be fed only small plastic (ABS) battery packs, a majority of LIB gadgets, and LIB bricks. Larger structural assemblies – such as EV packs or 19″ rack packs – would undergo repair and refurbishment whenever possible, and be fully dismantled into super-cells or large cells if repair is deemed unprofitable. Super-cell (module) SoH evaluation could be done, and defective ones would be sent to the shredder process.

This would in turn reduce the shredder equipment and conveyor sizing and power requirements, as well as expedite the process, since it would not have to process structural steel or the metal covers of large EV battery packs.

As for full battery pack salvaging and refurbishing, profitability could be driven higher if an automated process for mating to the battery terminals and querying the BMS were devised, to assess battery SoH.

Statistical studies should be done on the SoH of the LIB cell and pack population reaching recycling operations (excluding refurbishing and repair). The higher the average SoH, the higher the profitability of pack refurbishing operations.

The highest value would be generated by repair and refurbishing operations on large-capacity packs such as 19″ rack LiFePO4; these operations could also salvage large cells (in the dismantling case) that would come in handy for refurbishing operations or resale. Those large assemblies are less prone to shorts from human manipulation error compared to small arrays.

It should be taken into account that the LIB e-waste feed may contain a substantial portion of fully working, low-cycle, recently manufactured battery packs, as the battery may have been unpaired from a defective device; this is why BMS querying is imperative.

Indirect process – black mass post-processing.

What about EV batteries ?

EV batteries will probably require specific, dedicated plants for maintenance due to feed volume and large size, as well as a local repair and refurbishing ecosystem (automotive mechanics with electrochemistry training).

The presence of an EV battery in the e-waste ecosystem would mostly be the result of :

-Loss of structural integrity of the pack (e.g. following an accident)

-End of life for most cells or super-cells of the pack.

-No market for the pack (obsolete model)

Access to individual cells or super-cells is notoriously difficult in certain technologies: a high density of screws, protection mats requiring a large amount of force to remove, etc.

https://pubs.acs.org/doi/10.1021/acsenergylett.1c02602#

https://www.frontiersin.org/articles/10.3389/fchem.2020.578044/full

Ltspice synchronous generator model in abc reference frame coupled with VAWT prime mover and DC battery charging load.

status : DRAFT, pending review.

Current ltspice model (zip file) in debugging stage, for reviews, without the VAWT prime mover :

Contains plt file, readme, net file, model asc file and png file plot example

Incentive.

The following work explores the behaviour of medium-size residential/agricultural/community wind turbines when coupled to a synchronous generator with field excitation.

Traditionally, small synchronous generators with field excitation are found in small hydroelectric generation setups.

Most commercial devices in the wind generation category, by contrast, are fitted with PMSGs (Permanent Magnet Synchronous Generators).

PMSGs usually present lower inertia and include rare-earth magnets, which contribute significantly to the overall cost of the generator, and whose cost is expected to grow significantly due to undersupply and large demand from the EV market.

https://www.mining.com/supply-of-rare-earth-magnets-wont-keep-with-demand-by-2040-report/

Permanent magnets are desirable for small to medium power sizes due to the absence of field losses, which contribute substantially to the total power loss of the generator at small scales. Besides magnets, most PMSM rotor configurations use electrical steel to form either salient or round rotors. The main difference from an economic perspective is the initial cost of copper or aluminum vs. rare-earth magnets, and the lower efficiency due to field excitation as a variable and recurring loss; this ranges from 2% to 4% for operation at nominal conditions for a 75 kVA machine. Our investigation will first assume constant-excitation, variable-voltage operation, while control of excitation will be considered for dump-load operation (braking) on a static resistive load, with the intent of thermal generation in high-wind conditions. We will also take into account the stalling speed of the VAWT, updating the control law accordingly to prevent this phenomenon, and investigate alternative control laws for dump-load operation and braking under constant field and variable output impedance (simulating a servo-rheostat load). These are mainly of use for PMSM dump-load operation (whose field is constant).

Synchronous Generator model part.

The model represents a synchronous machine with the following characteristics :

3 phase stator, wye configuration.

Variable field excitation, though the model does not implement AVR based on excitation control, at least for now. Variable field excitation is investigated for dump load (crowbar) operation.

Note that a permanent magnet synchronous generator could be crudely modelled with this setup by setting an adequate static field strength (with no ramp-up) and a higher number of poles. Note that most PMSG designs do not use dampers.

Parametric number of poles. Note also that the q-damper winding inductance is valid for a two-pole machine; for p > 2, the q-damper self and mutual inductances should be set to 0 (or the q-damper excluded from the circuit).

Rotor Saliency is modelled, through the Lms parameter.

Model is based on flux linkages. It specifies leakage, mutual inductance, and resistance of each winding. The model is ‘lumped’ in terms of inductances, i.e. it does not specify inductances as a function of winding geometry (number of turns, area, length, permeability.. etc.)

All mutual inductances are modelled (stator phase to each other stator phase, rotor field to each rotor damper, and stator phase to each rotor winding)

The default setup includes one damper coil on the d axis and one damper coil on the q axis, and uses a two pole rotor by default.

This makes 6 flux expressions with 6 components each : 1 flux component arising from self inductance, and 5 components corresponding to mutual inductances since there are 6 windings in total.

Due to a quirk in the Ltspice parser for arbitrary inductors based on flux expressions, I had to separate self flux from the mutual flux expressions.

Ltspice expects the current flowing through the inductor to be represented by the variable ‘x’. Thus I had to use the form Flux = Lself*x + sum(flux_linkages)

That is why the flux linkage expressions in the following screenshots have 5 components instead of 6; the final flux expression combining them all is specified in the inductor ‘Flux=’ expression, seen in Figure 5.
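The Lself*x plus five mutual terms structure described above can be assembled programmatically, which is handy when writing six such expressions by hand. The following sketch builds the expression string for one stator phase; all the names (V(laa), V(lab), I(Lb), ...) are illustrative placeholders, not taken verbatim from the model.

```python
# Sketch of assembling a per-winding LTspice 'Flux=' expression in the form
# Flux = Lself*x + sum(mutual_k * i_k): the self-flux term uses the inductor's
# own current 'x', and each of the 5 other windings contributes a mutual term.
# Names below are illustrative placeholders, not the model's actual node names.

def flux_expression(self_l, mutuals):
    """self_l: self-inductance term; mutuals: list of (mutual_L, current) pairs."""
    terms = [f"{self_l}*x"] + [f"{m}*{i}" for m, i in mutuals]
    return "Flux=" + "+".join(terms)

expr = flux_expression(
    "V(laa)",
    [("V(lab)", "I(Lb)"), ("V(lac)", "I(Lc)"),
     ("V(lafd)", "I(Lfd)"), ("V(lakd)", "I(Lkd)"), ("V(lakq)", "I(Lkq)")],
)
print(expr)
```

Generating the six expressions from one function also makes it harder to introduce a sign or index typo in one of the 36 inductance terms.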

Figure 1 Natural inductances parameters
Figure 2 Stator self inductances expressions
Figure 3 Stator flux linkages expressions
Figure 4 Rotor flux linkages expressions
Figure 5 Stator equivalent circuit
Figure 6 Rotor equivalent circuits

Magnetic saturation effects are not modelled for now. Usually the field winding is driven close to saturation, which makes this modelling significant, at least for large generators.

The spatial flux distribution does not model anisotropies coming from the physical geometry of the rotor shoes and the stator wedges and slots. These flux anisotropies give rise to an EMF with harmonic distortion. The model only captures the anisotropies coming from the saliency model, which is a first-order approximation.

The model uses the LTspice arbitrary inductor model to express self flux and flux linkages. The windings thus use inductors as the source of emf, not behavioural voltage sources. The only inductor that is powered by a DC source is the field winding.

The main incentive for using an abc frame equivalent circuit instead of a faster dq0 reference is that :

A model in the abc reference frame has the advantage of being better suited for non-steady-state and non-linear loads, and islanded mode (not connected to the grid).

As an example, the model feeds a resistive load and smoothing capacitor through a 6 diode three phase passive rectifier. Despite the load non linearity, the model performs well.

The model does not drive the shaft at synchronous speed (the steady-state turns/min shaft rotational speed provided by the manufacturer); it takes as input mechanical power from the VAWT, which is itself a function of wind speed (provided through the V12 source PWL input) and VAWT rotor speed. A steady mechanical input power, modelled through the V2 source, can be swapped in instead of the VAWT for debugging purposes.
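For orientation, the mechanical power a VAWT makes available to the shaft as a function of wind speed can be sketched with the standard actuator-disc formula P = ½·ρ·A·Cp·v³. The swept area and power coefficient below are illustrative values, not parameters of the LTspice model.

```python
# Rough sketch of VAWT shaft power vs. wind speed using P = 0.5*rho*A*Cp*v^3.
# AREA and CP are ASSUMED illustrative values, not taken from the model.

RHO = 1.225   # air density, kg/m^3 (sea level, ~15 degC)
AREA = 6.0    # assumed swept area, m^2 (e.g. 2 m diameter x 3 m height)
CP = 0.30     # assumed power coefficient (the Betz limit is ~0.593)

def vawt_power_w(wind_speed_ms):
    """Mechanical power delivered to the shaft, in watts."""
    return 0.5 * RHO * AREA * CP * wind_speed_ms ** 3

for v in (4.0, 8.0, 12.0):
    print(f"{v:4.1f} m/s -> {vawt_power_w(v):7.1f} W")
```

The cubic dependence on wind speed is what makes dump-load control in high-wind conditions necessary: doubling the wind speed multiplies the available power by eight.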

A further refinement of the model for MPPT modelling could make use of a WAV file input as the source of real world wind speed data to model gust surges.

Mechanical losses are modelled through friction and windage power losses of the generator assembly, which are assumed to be constant.

Inductance and resistance parameters.

We used natural SI units, not the p.u. system.

The main challenge with such a model in the abc reference frame is that manufacturers specify alternator parameters as synchronous reactances, transient and subtransient reactances, and machine time constants, often in p.u. units. Alternators are not designed per se to operate at electrical frequencies other than 50 or 60 Hz, and the larger models are also intended to be grid-tied, so manufacturers provide parameters related to their intended use, standardized for industry-specific simulation software (for the larger, >1 MVA models).

That means these have to be converted back to natural self and mutual inductances. Conversion from p.u. to SI units is straightforward, as is converting reactances to inductances.
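The straightforward part of the chain looks like this: the base impedance follows from the ratings, a p.u. reactance scales by it, and L = X / (2πf). The 75 kVA / 400 V / 50 Hz ratings and the 1.8 p.u. example reactance are illustrative; the actual base definitions should be checked against the machine's datasheet.

```python
# Sketch of the p.u. -> SI conversion chain for a machine rated S_n (VA)
# and V_n (line-to-line volts) at frequency f. Ratings below are ASSUMED
# illustrative values; verify the base definitions against the datasheet.
import math

S_N = 75e3    # rated apparent power, VA
V_N = 400.0   # rated line-to-line voltage, V
F = 50.0      # rated frequency, Hz

Z_BASE = V_N ** 2 / S_N  # base impedance, ohms

def pu_reactance_to_inductance(x_pu):
    """Convert a per-unit reactance to a natural inductance in henries."""
    x_ohm = x_pu * Z_BASE              # reactance in ohms
    return x_ohm / (2 * math.pi * F)   # L = X / (2*pi*f)

xd_pu = 1.8  # example synchronous reactance in p.u.
print(f"Z_base = {Z_BASE:.3f} ohm")
print(f"Ld = {pu_reactance_to_inductance(xd_pu) * 1e3:.2f} mH")
```

The hard step the text goes on to describe is the one this sketch does not cover: splitting such a synchronous inductance into the abc-frame self and mutual terms.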

The real challenge is to derive abc reference ‘natural’ inductances from the previously obtained inductances.

Some parameters might be measured experimentally, such as resistances, if the generator is at hand. Inductances are especially hard to measure for dampers, which are shorted and have no external leads. For the other windings it is also hard, because inductances vary as a function of rotor position. One could get an approximation of self inductances by measuring one winding while all the others are shorted, and of mutual inductances by measuring one winding and the mutual winding under investigation while the others are shorted, repeating these measurements at various rotor angles to determine inductance minima and maxima. But damper winding inductances would still not be measurable.

The proposed method is explained here :

https://electronics.stackexchange.com/questions/678068/3-phase-synchronous-machine-with-salient-rotor-inductances-and-resistances-measu

It assumes that the stator is star (wye) wired and that the neutral point is accessible. The problem with this method is the measurement frequency of an LCR meter, often above 100 kHz, which gives rise to skin effects that lower the measured inductance value versus reality, as well as the low current used in the measurement, which places it in a portion of the B/H curve different from that of nominal currents. Since the B/H curve of electrical steel is not fully linear (permeability is not a constant factor) and is lower at very low currents, this also lowers the measured inductance. How that affects all the inductance ratios is a whole other issue. This method is probably more accurate for a low number of poles, ideally two. Whether the proposed method has any practical and theoretical validity is not certain, so take it with a grain of salt. I personally could not find any resource that performed the experiment. A selsyn synchro with a two-pole rotor and three-phase stator, or a wye-wired car alternator with an accessible neutral and removed AVR and rectifier diodes, could be put to the test bench with this method. Note that a car alternator's claw-pole rotor arrangement is not faithfully represented by this Ltspice model.

Usually, most experimental methods deal with the determination of synchronous reactances Xd, Xq.

The Electric Generators Handbook from Ion Boldea explains the method in chapter 4.

Theoretical determination of natural mutual inductances from datasheet reactances (synchronous, transient, subtransient)

The following paper proposes a method to determine mutual inductances from datasheet parameters. It makes use of scaling parameters Kf, Kd, Kq to perform the conversion, which yields an approximation :

(1)

https://studylib.net/doc/18754307/analysis-of-synchronous-machine-modeling-for-simulation-and

Analysis of synchronous machine modeling for simulation
and industrial applications
Barakat,Tnani,Champenois,Mouni

Issues with the model with the default parameters

Note that the 75kVA generator parameters used in the present Ltspice models are derived from the paper (1). The excitation field winding resistance specified in the paper (around 2 ohms) gives unbearably high field losses in comparison to power output at low prime mover input power, so it has been lowered. A field resistance of about 2 ohms is common for machines in that power range.

This methodology is questionable, but it reflects the fact that this generator is merely used as a proof of concept for the whole Ltspice model, and that a more fitting generator should be used for DC generation from a residential VAWT. Whether such a generator with a lower field resistance would be physically possible to build, all other parameters being equal, needs further investigation.

There is also the potential issue of the order of magnitude of the inductances specified in table 5 of (1), which are in the H range instead of the mH range; inductances of several thousand mH are usually found in generators in the thousand MVA power rating range.

Determining empirically inductance parameters for the model to converge

Fortunately, there are some ways to constrain inductance parameters to the range of physical realisability.

Basically, the key quantity is the ratio of the mutual inductance (stator to field) to the square root of the product of the stator and field self inductances. In the Ltspice model, the time varying self inductances of the stator are accessible through V(laa), V(lbb) etc… and the self inductance of the field winding is Lffd plus the air gap self inductance.

This ratio can never be > 1, and in practice is low, as the windings (field and stator) do not share a magnetic core but are coupled through an air gap.
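As a sanity check, this constraint can be expressed in a few lines of Python. The numeric values below are purely illustrative placeholders (only Lafd = 0.19 comes from the text further down), not the paper's actual parameters:

```python
import math

def coupling_factor(m_mutual, l_self_a, l_self_b):
    """Coupling factor k = M / sqrt(La * Lb).
    Must satisfy 0 <= k < 1 for a physically realisable winding pair."""
    return m_mutual / math.sqrt(l_self_a * l_self_b)

# Illustrative values: Lafd = 0.19 H (mutual), placeholder stator and
# field self inductances in henries.
k = coupling_factor(0.19, 0.04, 2.0)
assert 0 <= k < 1, "not physically realisable"
```

Any candidate parameter set that fails this assertion can be rejected before even running the simulation.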

The following master thesis from 1976 tackles the issue of lower and upper bounds of physical realisability in terms of inductances. It is also one of the very few papers that provides numerical values for inductance parameters in the abc reference frame.

(2)

https://ttu-ir.tdl.org/bitstream/handle/2346/15506/31295015505711.pdf?sequence=1

The main issue, even in a conservatively low coupling range, is that the simulation speed is very sensitive to this ratio : if Lafd is raised from 0.19 to around 0.69 or so (Laa0 and Lffd being constant), the simulation speed is reduced by a factor of around 2000 (using the parameters of paper (1)). The mutual inductance between the field and stator windings is the main parameter influencing the magnitude of the induced EMF on the stator.

Also, the stator leakage inductance could not be derived from the paper, so it was derived independently. Note that bounding constraints arise from the fact that the mutual inductance between windings cannot be higher than the square root of the product of the windings' self inductances, assuming a coupling factor of unity. If an approximation of the coupling factor is made, based on the fundamental air gap distance between the field shoe and stator slot, the stator self inductance upper limit can be derived. Moreover, a physically impossible parameter input gives rise to convergence or performance issues almost immediately in our model, which helps in determining adequate values.

Of course, this methodology allows modeling a somewhat physically realisable generator, not necessarily one available on the market.

As for the stator self inductance, it seems that it can be derived easily if the zero sequence inductance is given in the datasheet, through the formula for L0 in figure 7, and Loavg in figure 1 of the present article.

Mechanical System Model

The mechanical system of a synchronous generator is based on the torque balance equation. It takes 3 inputs, namely electrical torque Te, mechanical torque Tm, and the friction and windage torque of the generator rotor, Tfw.

Electrical torque Te is given by the formula input of the B12 source. This formula first requires the computation of the dq0 transform, also known as the Park transform, of the fluxes and currents, as seen in Figure 7.
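A minimal sketch of the dq0 transform and torque computation, assuming the common amplitude-invariant Park convention and the standard torque expression Te = (3/2)·p·(λd·iq − λq·id); the exact B12 expression in the model may use different sign or scaling conventions:

```python
import math

def park(a, b, c, theta):
    """dq0 (Park) transform of three phase quantities, amplitude-invariant
    form. Sign and scaling conventions vary between references."""
    two_thirds = 2.0 / 3.0
    d = two_thirds * (a * math.cos(theta)
                      + b * math.cos(theta - 2 * math.pi / 3)
                      + c * math.cos(theta + 2 * math.pi / 3))
    q = -two_thirds * (a * math.sin(theta)
                       + b * math.sin(theta - 2 * math.pi / 3)
                       + c * math.sin(theta + 2 * math.pi / 3))
    zero = (a + b + c) / 3.0
    return d, q, zero

def electrical_torque(flux_d, flux_q, i_d, i_q, pole_pairs):
    """Te = (3/2) * p * (lambda_d * i_q - lambda_q * i_d)."""
    return 1.5 * pole_pairs * (flux_d * i_q - flux_q * i_d)
```

For a balanced three phase set aligned with the d axis, the transform returns (1, 0, 0), which is a quick way to verify the convention in use.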

Tm is derived from the VAWT model power output divided by the VAWT rotor rotational speed in radians per second.

Figure 7 dq0 transform
Figure 8. Electromechanical coupling. The V2 behavioural source is used for debugging.

Note that the V2 behavioural source is used for debugging, ramping to a constant mechanical power input. The actual source of VAWT mechanical power, source B21 is shown in figure 10.

The torque balance takes into account the torque ratio due to the gearbox. The generator shaft and VAWT shaft angular speeds are also proportional to the gear ratio.

The generator shaft acceleration is given by the torque balance divided by the total shaft inertia as seen from the generator reference; that is, the contribution of the VAWT rotor assembly has to be multiplied by the gear ratio and added to the generator rotor inertia.

Gear ratio is assumed to be < 1.
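The torque balance above can be sketched as follows. It follows the article's stated referral convention (the VAWT contribution is multiplied by the gear ratio); note that the textbook referral of inertia to another shaft uses the square of the speed ratio, so treat this as an illustration of the article's description rather than a general formula. Signs and values are illustrative:

```python
def shaft_acceleration(te, tm_vawt, tfw, j_gen, j_vawt, gear_ratio):
    """Generator shaft angular acceleration from the torque balance,
    with VAWT torque and inertia referred to the generator shaft via
    the gear ratio (assumed < 1), per the article's description."""
    tm_referred = tm_vawt * gear_ratio      # VAWT torque seen by generator
    j_total = j_gen + j_vawt * gear_ratio   # article's stated inertia referral
    return (tm_referred - te - tfw) / j_total
```

With Tm = 100 N·m at the VAWT shaft, a 0.1 gear ratio, Te = 10 N·m and Tfw = 1 N·m, the net torque is negative and the shaft decelerates.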

The model for the VAWT turbine is explained in detail in

(3)

https://www.econstor.eu/bitstream/10419/244151/1/1775681092.pdf

Figure 9 Mechanical assembly and VAWT parameters
Figure 10 VAWT mechanical model and wind input

Electrical load model

The electrical load is a 48V lead acid battery bank. The very crude model sets the battery electrochemical potential at 12.9V, under which no charging occurs. Battery internal resistance depends on battery capacity and state of charge (SoC). Charging current at a given voltage is initially low for deeply discharged batteries because of the high internal resistance of the battery cells. It then ramps up with decreasing internal resistance as SoC rises, and decreases again near end of charge, not because of internal resistance this time, but because the electrochemical potential rises as the battery charges. Due to the very long time constants of battery charging processes (several hours at a conservative 0.1C charging rate), it is unrealistic to simulate a whole charge cycle, and thus a static model is sufficient.
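A static model of this kind can be sketched as below. The internal resistance curve and the 51.6V bank EMF (4 × 12.9V) are assumptions for illustration, not the article's exact LTspice expressions:

```python
def battery_current(v_bus, soc, n_batt=4, e_batt=12.9):
    """Static charging model: I = (Vbus - Ebank) / Rint(SoC).
    The Rint(SoC) curve is a placeholder shape: high resistance when
    deeply discharged, falling to a floor value as SoC rises."""
    e_bank = n_batt * e_batt                          # 51.6 V for a 48 V bank
    r_int = 0.05 + 0.45 * max(0.0, 0.2 - soc) / 0.2   # ohms, illustrative
    return max(0.0, (v_bus - e_bank) / r_int)         # no discharge modeled
```

Below the bank EMF the model draws no current, matching the "no charging occurs" behaviour described above.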

The battery requires a DC charging circuit, ideally at constant current for the bulk charging phase. This is provided by a 6 diode 3 phase (passive) rectifier bridge, followed by a smoothing capacitor. This forms the output of the unregulated DC link.

For regulation, we decided to model a circuit that transposes well to a digital control strategy instead of an analog buck converter control IC, mainly for two reasons :

Digital control allows full flexibility to implement a control system for the buck converter as well as field excitation control, wind sensor input, AC frequency input, fault detection input, battery parameter and monitoring inputs, etc. It also allows experimentation to optimize the algorithm in order to achieve MPPT.

The second reason is simply to speed up the simulation, provided proper care is taken to avoid convergence issues arising from expressions like “IF” in behavioural sources; in many cases, it is better to replace them with differential Schmitt triggers.

A buck converter is preferred to a buck-boost converter for this design to avoid drawing power at low rpm (when the VAWT power coefficient is low), which could cause the VAWT turbine to stall. This logic is taken further by implementing a threshold at which switching starts, from open circuit, at an input voltage well above the battery bank voltage. This forms the UVLO (under voltage lock-out) logic. Under that voltage (accounting for hysteresis), switching stops and the load is disconnected.
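The UVLO logic with hysteresis can be sketched as a small state machine; the threshold values here are placeholders, not the model's actual settings:

```python
class Uvlo:
    """Under voltage lock-out with hysteresis: switching is enabled only
    once the unregulated input rises above v_on, and stays enabled until
    it falls below v_off (< v_on). Thresholds are illustrative."""
    def __init__(self, v_on=60.0, v_off=55.0):
        self.v_on, self.v_off = v_on, v_off
        self.enabled = False

    def update(self, v_in):
        if self.enabled and v_in < self.v_off:
            self.enabled = False          # disconnect load, let rpm recover
        elif not self.enabled and v_in > self.v_on:
            self.enabled = True           # input high enough, start switching
        return self.enabled
```

The hysteresis band prevents rapid connect/disconnect cycling around the threshold, which would itself perturb the turbine.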

Input overvoltage protection is a protection mechanism for the load: at such voltages, switching times become unreasonably short (low duty cycles) and exceed the operational envelope of the inductors, and the Vds rating of the MOSFET. Of course, disconnecting the load at this point could lead to a runaway rpm overload of the turbine and generator, and damage.

That is why wind turbine controllers have dump load terminals. In case of high wind speeds, or if the load cannot absorb more power because of low demand and/or a fully charged battery, a crowbar circuit diverts power to an ohmic load, typically a large rheostat. This has the effect of lowering rpm and unregulated DC voltage at a given excitation level, thus protecting the turbine, diodes and MOSFETs. Whether the dump load performs useful work depends on the setup. For high geographic latitude, high constant wind stations, the dump load can be used for heating an enclosed space to keep electronics at a safe temperature; it may also serve as a cold water network preheater, to avoid pipe ice damage.

The use of non-resistive loads such as inductive loads, linking the generator 3 phases to, let's say, a 3 phase induction motor acting as a water pump, is trickier, since care has to be taken that the crowbar activates at an electrical frequency within the motor frequency range. Frequent startups of an induction motor at high voltage and high frequency lead to premature failure of the motor due to high starting currents. If that were not the case, star delta starters or VFDs would not be a thing. As for reactive coupling, synchronous generators provide reactive power, which can be consumed by induction motors. Moreover, the use of a passive crowbar triggered by high voltages may give rise to a voltage waveform non-sinusoidal in nature, even more so if the DC stage and regulator is not disconnected and keeps drawing power, as we will see later. It is recommended to devise a complementary passive method to disconnect the DC stage when the crowbar operates, if one wishes to experiment with inductive loading of the crowbar circuit.

Grid forming or grid supplementing setups, using grid tie inverters, are the most sensible way of making useful use of the power, but they are outside the scope and intent of this article, which focuses on small setup islanded generation of DC power.

Let’s get back to the model now.

Figure 11 Load and Field electrical parameters

Here we specify the load (battery bank) internal resistance as well as the DC field excitation voltage.

AC/DC Conversion and Load Regulation, and battery charging

AC/DC conversion is straightforward.

As for the load, the model includes a basic buck converter used to charge a 48V battery bank. It does not take into account an additional DC bus to power a load besides the battery. Also, the control algorithm is just an example: gain coefficients are not optimized, and a state of the art charger would probably achieve MPPT based on sliding mode control, for better efficiency and to keep the system state within its safe operational envelope. Sliding mode control is part of modern control theory and an advanced design choice that is outside the scope of this article; it would be adequate given the complexity (large state parameter space) and non-linearity.

Note that the control mechanism does not involve generator excitation control. This will be explored in a continuation article.

Figure 12 3 phase, 6 diode bridge rectifier
Figure 13. Basic ‘idealized’ DC buck converter for digital control strategy

This is an idealized proof of concept version, since it does not make use of a gate driver, using instead a VCVS (E1), as well as for current sensing (E2); it also uses ideal diodes, as well as a crude battery model. The rest of the control algorithm is meant to represent digital control. The use of a first order LP filter for in_fb eases convergence since the signal is noisy; hysteresis and rise/fall times of the Schmitt triggers also help.

Let’s cycle through the main parts of the controller.

V6 and B18 signals are fed to the differential Schmitt trigger.

The Schmitt trigger compares the signal that represents the duty cycle to a sine wave at the switching frequency, with a DC offset equal to its amplitude; the result is a varying duty cycle square wave signal at the switching frequency.
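This comparison can be sketched as follows (hysteresis omitted; note that a sine carrier maps the duty command to the on-time fraction nonlinearly, unlike a triangle carrier):

```python
import math

def pwm(duty, t, f_sw=500.0):
    """Compare the duty command to a unit sine carrier with DC offset
    equal to its amplitude (so it swings 0..1), as the differential
    Schmitt trigger does in the model. Returns the gate logic level."""
    carrier = 0.5 + 0.5 * math.sin(2 * math.pi * f_sw * t)
    return 1 if duty > carrier else 0
```

A duty command of 1 keeps the output high almost continuously; a command of 0 keeps it low.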

This signal is sent to a VCVS (E1) that is meant to represent a gate driver, like the IR2110S IC. Its gain is 3, so as to drive the high power MOSFET to a sufficiently high Vgs voltage, at around 20V.

The base duty cycle is calculated with the standard buck converter duty cycle formula. Instead of taking the maximum expected unregulated input voltage, as seen in most application notes, it takes the present unregulated voltage, unreg_dc(). All of this is multiplied by the efficiency factor of the buck converter.

In essence, this forms an open control loop which is based on theoretical values and, if subjected to calibration, would give better output voltage regulation.

Since this is not enough for real world scenarios and calibration is not always possible independently for each device, a closed control loop helps in regulating the output voltage. The open control loop formula only helps to give a setpoint duty cycle from which the controller should start switching.

The closed control loop negative feedback is given by the 2*(4 – (V(feedback) – V(vss))) term in the duty() function.

This ensures that the output voltage discrepancy from the open control loop is corrected. Note that there is no compensation network filter in the feedback loop. Those can be implemented in analog form or digital form.

Also note that duty_base() is 0 if the controller detects a UVLO condition. This protects the VAWT turbine from stalling by disconnecting the battery load.

dutybnd() is just a numerical conditioner to prevent duty cycles > 1.
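Putting the pieces together, here is a hedged sketch of the duty computation: the open-loop buck relation using the present unregulated voltage, a proportional feedback correction, the UVLO cutoff, and dutybnd()-style bounding. The gain, reference value and signal names are illustrative stand-ins for the model's actual duty()/duty_base()/dutybnd() expressions:

```python
def duty_cycle(v_out_target, v_unreg, v_feedback, efficiency=0.9,
               uvlo_ok=True, k_fb=2.0, v_ref=4.0):
    """Open-loop setpoint D = Vout / (eff * Vin), using the present
    unregulated input voltage, plus a proportional closed-loop
    correction against a scaled feedback signal, bounded to [0, 1]."""
    if not uvlo_ok:
        return 0.0                      # disconnect load, avoid VAWT stall
    base = v_out_target / (efficiency * v_unreg)
    corrected = base + k_fb * (v_ref - v_feedback)
    return min(max(corrected, 0.0), 1.0)
```

When the feedback equals its reference, the controller runs at the open-loop setpoint; any discrepancy shifts the duty cycle to correct the output.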

The boost() function ensures that the voltage, and hence the current flowing into the battery, rises after the unregulated DC setpoint defined in the boost() function is crossed.

At this point, the feedback from the output will be dictated by the current control loop formed by E2 and D9 once the current threshold is crossed, and will oppose the boost function. The DC equilibrium point is defined by the crossover point of the boost linear function and the output feedback function, which mainly depends on their respective gains. It is preferable for the output feedback function to take over from the boost function soon after the set charging current threshold, to keep the battery charging current, inductor current, and MOSFET average and peak currents within their nominal ranges.

An additional protection layer is provided by the ovlo() function, which kills the boost function. Once the OVLO threshold is crossed, the dump load crowbar should be activated to protect the MOSFET from high Vds.

Simulation performance considerations

Care has been taken in the DC load model to ease convergence. Simulation speed is inversely proportional to the rotor assembly rotational speed. Another important factor that slows the simulation is the switching frequency of the buck converter. Given the long time constants required to produce meaningful data, it has been kept at the very low value of f_sw = 500 Hz, compared to usual converter designs.

Annex A: Generator fault testing.

The following circuits were used to test the behaviour of the generator under load rejection and 3 phase short conditions to check for adequate response.

Figure 14. 3 phase Short Fault circuit
Figure 15. load rejection test circuit

Annex B : Overvoltage and mechanical overload protection of the VAWT

High wind conditions and the inability of the load to absorb power, because of charge termination or low power demand downstream of the battery on the DC bus, may cause mechanical overload of the VAWT, too high generator rotor speeds, too high fluxes, heating, arcing, and overvoltages that can exceed the winding insulation dielectric strength and cause the winding to fail.

Some high end HAWTs can be feathered by adjusting the pitch angle of the blades to decrease wind coupling. Variable pitch VAWTs or adjustable vanes can be designed to protect the turbine at the root of the issue, but these complicate the turbine design.

The low cost method involves electrical braking by dumping excess power into a dump load through a crowbar mechanism. This will decrease rpm, but the whole assembly will still be subjected to high torque conditions.

As already mentioned, the crowbar can be implemented on the DC bus or on the AC bus. Since this is a critical safety mechanism, care is needed to make it work in a failsafe and passive manner, without any high level input from an IC or microcontroller, or at least have a completely passive system to supplement the active one.

A passive system on the AC bus can detect the overvoltage that signals high wind conditions or underload, and be implemented through TRIACs, one for each phase, triggered by a current pulse through the gate that is initiated when series back to back Zeners or a single TVS diode start conducting, once the line to gate voltage is above their breakdown voltage. A current limiting resistor should be put in place to limit the current flowing into the gate below Igtm. Note that this is more a continuous trigger that persists while the overvoltage is present; the trigger ceases when the AC waveform goes below the TVS or Zener voltage. In that condition, the TRIAC still keeps conducting until the AC current gets close to 0. After that zero cross, the TRIAC won't conduct until the voltage threshold of the TVS/Zeners is crossed again.

The resulting AC characteristics seen by the dump load are not sinusoidal but chopped, and should not be used to drive an inductive load like a motor. The strategy mentioned before, using an induction motor as a dump load that performs useful work, is possible through active switching by a contactor or relay, for instance, if the motor operates within the voltage and volts-per-Hz limits that would arise in worst case turbine overload conditions at the time of startup. That generally means that the motor should be over-rated in terms of power, so it would operate well below nominal conditions, taking into account fluctuations arising from unpredictable wind conditions that impart stress on the motor, and repeated start/stop cycles. An inrush protection device could be envisioned, and contactor hysteresis should also be taken into account to limit start/stop cycles. The TRIAC overvoltage protection would still be used at a higher overvoltage trigger level as a last resort protection, powering three rheostats.

Despite its higher complexity, the AC dump load strategy also has the advantage of diverting the current before the rectifier bridge, thus limiting current and heating stress on the rectifier diodes.

DC crowbar protection.

This design is easier to implement, requiring a single SCR and a Zener or TVS triggering mechanism, placed before the switching MOSFET stage. It provides unregulated DC (with a substantial amount of ripple) to a load, ideally a rheostat. Since the crowbar operates on DC, there is no current zero crossing, and it will stay in forward conducting mode as long as the generator and VAWT keep turning, provided that the buck converter shuts itself off (zero duty cycle) to prevent the battery from back feeding into the dump load.

Reverting to an open crowbar can be done by temporarily shunting the anode and cathode of the SCR through a previously charged capacitor, so that the anode sees ground potential and the cathode sees Vdc (essentially, the SCR sees a reverse voltage pulse). The reverse discharging of the capacitor is accomplished through a MOSFET or relay. The capacitance must be sufficiently large to provide a discharging time constant longer than the Toff parameter of the SCR; it also depends on the dump load impedance.
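A rough sizing rule for that capacitor, under the simplifying assumption that it discharges through the dump load resistance and must hold the SCR reverse-biased for at least its turn-off time t_q, is C ≥ t_q / (R·ln 2). This is a classical first-order estimate for forced commutation, not a value taken from the article; verify against the SCR datasheet:

```python
import math

def commutation_cap(t_q, r_dump):
    """Minimum commutation capacitance so the SCR stays reverse-biased
    for at least its turn-off time t_q while the pre-charged capacitor
    discharges through the dump load: t_reverse ~ R*C*ln(2)."""
    return t_q / (r_dump * math.log(2))

# e.g. t_q = 100 us and a 10 ohm dump load -> roughly 14 uF
c_min = commutation_cap(100e-6, 10.0)
```

Margin should be added on top, since tolerance, temperature and residual capacitor charge all erode the effective reverse-bias time.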

Note that in this setup, the SCR is a low side switching device, the cathode of the SCR being at ground potential. See figure 16.

Dump load considerations.

We will focus on a resistive dump load, as this kind of load offers maximum flexibility, ease of design, and safety of operation, and is optimally controllable for adequate braking and overvoltage protection.

We will explore two control methods to perform adequate control of the dump load operation.

One is based on electromechanical impedance matching : instead of using a switching device to perform impedance matching, a servo actuated rheostat would be used. A rotary rheostat driven by a stepper motor is preferred, to obtain fast response. However, for testing purposes, it seems that linear rheostats of high power (> 500W) are cheaper and more easily available. These would require the use of a linear actuator, which has comparatively slower response times.

Note however that a high dR/dt results in high torque fluctuations.

Also, a control law based on crowbar electromechanical impedance matching operating between a low and a high setpoint of a fast reacting variable like DC voltage will introduce generator hunting effects and the aforementioned high dR/dt: essentially, the control mechanism resets the impedance to a high value once the crowbar is deactivated (no current flowing through the SCR). This introduces oscillatory behaviour, which gives rise to constant high amplitude motion of the servo actuator, introducing wiper fatigue, high torque fluctuations, and suboptimal braking. A control based on a slower variable like shaft speed is thus preferred.

Care has to be taken for adequate thermal management of the rheostat, since it could operate on a substantially low fraction of the total winding number and create hotspots. A forced cooling method, or operation of the rheostat in a thermally effective medium such as transformer oil, may be necessary, and would significantly increase device complexity due to safety requirements. This would involve designing the dump load the same way a large transformer is built. The produced heat could be used offsite.

Excitation control during crowbar operation, static impedance dump load.

The other method assumes a static load impedance and varies the excitation level upon crowbar activation, according to a control mechanism specific to the crowbar-on operation mode. In this case, part of the Joule heating is dissipated in the generator, due to the substantial rise of excitation current needed to achieve adequate braking.

Direct Heat transfer through magnetic braking.

This method would add an intermediary rotor between the VAWT and the generator with a permanent magnet arrangement. Magnets should exhibit a high Curie temperature, according to the projected maximum temperature rise in the brake rotor through radiative and convective effects. A claw like copper heat sink would be engaged radially to modulate the braking effect arising from eddy current induction. The copper heat sink would be fitted with L shaped copper heat pipe protrusions, which would sit partially in an effective thermal medium stored in a tank. The issue with this approach is that engagement of the claw is a mechanical process that would involve linear horizontal heat pipe motion, which would give rise to the challenging task of making the thermal medium tank airtight, let alone pressurised. Another issue is the axial and radial force components on the claw mechanism, which would require adequate sturdiness of the claw engagement/disengagement actuator. Economic factors should also be taken into account, as permanent magnets are costly due to the rarity of the source materials.

Prevention of VAWT stall.

<to be continued>

230VAC to 48V, 1400W Lead-Acid battery bank charging circuit

Model download at the end of the article

This is the complementary circuit to the 48V to 400V converter, performing the opposite conversion.

However, it is presented here as a simple charger directly tied to mains without PFC.
The input line filter has been omitted for simplicity.

Since the charger assumes the presence of an AC link (230V AC for that design), logic power is supplied by two small AC/DC 50 Hz 3W transformers.

They are not modeled precisely to the specifications in this design.

The first transformer powers one LM317 regulator providing a 10.5V bias voltage to the switching LTC3721 IC and to the optocoupler transistor. The second transformer powers one LM317 set at 5V regulated output for the MCU, the LT1006 op-amp, and the AD8418 current metering op-amp, as well as the LM113 voltage reference diode used by the AD8418; and another LM317 with an output voltage set at 12V. The LT1001 used in the Howland current pump requires a 12V supply in order to source an adequate current level.

The circuit has been simulated up to 25A charging current for a 48V SLA battery bank. (Assuming an individual battery voltage of 12.4V)

Due to the simplest model used for the battery bank, the actual behavior to reach a steady state may be different.
CC is achieved by high-side current metering, whose signal is compared to a bias level output from a DAC.
The higher the DAC output level, the higher the charging current.

High-side current metering is preferred for chargers since it can detect load shorts and offers better noise immunity.
Here we use a dedicated AD8418 IC for that purpose.

This approach is failsafe in the event of a MCU failure since a 0V DAC voltage output would command a 0A charging current.
CV for float/trickle charging is achieved by varying the wiper position of a 5K I2C digital potentiometer, controlled by the MCU.

Note that the circuit has been tested on an ohmic load (10 and 25 ohm) for stability.

It could also be used as a versatile CC/CV PSU besides charging.

Using the circuit as a charger for a 24V battery bank could be envisioned, but it has not been tested for performance and stability at the time of publication of this article.

As for the rest of the circuit, it is more or less the same as the 48V to 400V converter from the preceding post.
Due to higher output currents in the secondary choke, the output power level has been derated.

As for stability, there is no perceivable ripple in the output up to 25A.

Assuming a charging current of 0.1*C (C being the battery capacity in Ah), this charger could theoretically charge a battery bank of four 250Ah batteries at a nominal rate.
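The arithmetic is straightforward: four 12V / 250Ah batteries in series form a 48V bank whose capacity is still 250Ah, so 0.1C charging corresponds to the 25A this circuit was simulated at:

```python
def nominal_charge_current(capacity_ah, c_rate=0.1):
    """Nominal charging current at a given C-rate. Series-connected
    batteries raise the bank voltage, not its Ah capacity."""
    return c_rate * capacity_ah

i_charge = nominal_charge_current(250)  # 25.0 A for a 250 Ah bank
```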

This circuit is the simplest expression of a CC/CV charger. It does not perform a battery bank voltage check before charging, temperature compensation, or coulomb counting. These features require further MCU / digital side control and are not expected to be modeled properly in Ltspice.

Speaking of digital control, it is expected that the MCU monitors the charging current through the AD8418, so as to set the DAC voltage “curr_offset” to perform the appropriate charging program, as well as to monitor the bank voltage, as a digital control outer loop.

Efficiencies (simulated)

Pout/Pin. Pin taken at the node after the full bridge rectifier.

5A : 0.951529
10A : 0.945238
15A : 0.942658
20A : 0.938610
25A : 0.933888
30A : 0.930961
CC mode (stepping curr_set DAC voltage)
CV mode, stepping 5k CV digipot. 10 ohm load.

Ltspice Model Download :

Ltspice Model of an isolated, 48V to 400V, 1600W, Push-Pull converter using Chan model inductors, using the LTC3721 IC.

And detailed design information

Disclaimer:

The following design comes with no guarantees whatsoever. Although a decent amount of time has been spent to ensure that the model works well over the whole range of its design constraints, some errors may still linger. Some more experienced engineers may find some design choices questionable. If you're able to optimize it or build something better around it, that's nice.

The choke inductor and the transformer may be a little over-engineered and drive the costs up, given the large ferrite core choices.

You may be able to extract more power than 1600W, but tread carefully.

Additional Resources :

The download is available at the end of this article.

There is also the ‘sister project’ to this one, which performs 230V AC to 48V DC conversion, with the intent of battery charging : It is designed to allow current/voltage control via DAC and a digital potentiometer, so you can digitally control voltage and current and design a charging program.

It is available here :

Abstract:

The main goal of this model is to serve as an aid in learning about the push-pull converter topology.
It could also prove useful when building a prototype, as part of a UPS or solar converter design.

Care has been taken not to overburden the simulator and allow reasonable simulation speeds.

Most of the simulation time is spent ramping up the voltage (soft start).
Using smaller starting loads and stepping them once the converter is fully started decreases simulation time, once you have determined that the soft start ramp is OK.

Intended Audience:

Makers or junior power engineers with little experience, looking for a project that can lead to prototype build.

Three models are supplied :

  • The fastest one uses linear inductors, voltage sources for IC power, no isolation, and a simple feedback circuit without optocoupler isolation.
  • The second one is the same as the model above but with non-linear inductors (Chan model)
  • The full model uses proper and more realistic component DC supply schemes as well as non-linear inductors, as well as isolation. Note that Ltspice is not always able to process isolated secondary circuits, even with the use of a stitching resistor, unless it has a very low resistance. Here the problem appeared, so we linked both grounds. A practical circuit, of course, will not have that constraint.

Design Parameters

  • Max continuous power 1600W. Inductor thermal effects are not modeled; derating may be advisable, although a large saturation margin has been taken into account.
  • Input from 48V (discharged battery, UVLO threshold) up to 57.6V (bulk charging voltage when a charger is connected)
  • 400V DC output. It supplies the same voltage as a PFC would.
  • This allows easier load switching or load sharing between the battery source and the AC / PFC source converter and can be adapted to larger designs.
  • Optimized for high power.
  • Moderate to good efficiency (0.93)
  • Low cost (uses powder core inductors)
  • Fully isolated design.
  • Under voltage lock-out set at 48V to prevent battery bank over-discharge.
  • 4mm² wire for primary of transformer, 2*24 turns, center tapped
  • 1.5mm² wire for secondary of transformer, 2*234 turns, center tapped
  • T400-26 transformer core (iron powder)
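As a sanity check on the listed turns (2×24 primary, 2×234 secondary), the idealized push-pull relation Vout = 2·D·Vin·(Ns/Np) gives a per-switch duty cycle comfortably below the 0.5 limit at the nominal 48V input (losses and diode drops ignored):

```python
def pushpull_duty(v_out, v_in, n_p=24, n_s=234):
    """Idealized push-pull (buck-derived) relation:
    Vout = 2 * D * Vin * (Ns/Np), with D the per-switch duty cycle,
    which must stay below 0.5. Losses and diode drops ignored."""
    return v_out / (2 * v_in * (n_s / n_p))

d = pushpull_duty(400.0, 48.0)  # roughly 0.43 with the listed turns
```

At the 57.6V bulk charging input, the same relation gives a lower duty cycle, so the 400V target remains reachable across the whole input range.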

As said before, it is for teaching or training rather than commercial purposes.
It will require hand winding of the toroid to build a prototype, which is a cumbersome and long process. It is advised to watch several videos about that art to do it properly the first time (you do not want to rewind it a second time). Building a toroidal transformer is a valuable learning experience.

Some Toroid Winding tips :

Use proper dielectric insulation between windings to control parasitics, it also serves to protect the windings from abrasion.

  • Use a counter to keep track of turns.
  • Keep the winding tensioned so it does not spring back.
  • You can also ask toroidal transformer shops to build you a custom transformer according to your specifications.
  • Proper care has to be taken to balance the primary windings around the center tap as equally as possible to avoid flux imbalance. Fortunately, the number of turns of the primary is low.
  • Strategically place the tap halfway through the core height, and wind the primary legs as symmetrically as possible, by adequately controlling the winding pitch. Looking at resources explaining the proper way of transformer tapping is advised.
  • To make enamel wire solderable, I usually use abrasive Dremel cylindrical tips. Do not use a flame: the enamel is very flame resistant, and the heat will anneal the copper, making it more fragile.
  • You can also watch videos from HAM radio makers, as a lot of them have mastered the art of core winding. A balun is not exactly the same as a push-pull transformer, but a lot of practical building tips apply.
  • Always check the final product inductance while testing on exposed wires, with a margin of excess wires so that you can always add turns if inductance falls short. Fortunately, transformer inductance is not that critical, what is critical is balancing and the turn ratio.

Note that it will be hard to find an exact turn ratio in commercial transformer catalogs, but you can always look for a close one and adapt the design accordingly. A different ratio will change the minimum and maximum duty cycles of the converter, which could make achieving the 400V target harder at the nominal output load of 4A.
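
As a sanity check on the 24:234 turn ratio, the idealized push-pull relation Vout = 2·D·Vin·(Ns/Np), ignoring diode drops and losses, gives the per-switch duty cycle needed to hold 400V across the battery voltage range. A minimal sketch:

```python
# Idealized push-pull duty cycle check (ignores diode drops and losses).
def required_duty(vin, vout=400.0, n_p=24, n_s=234):
    """Per-switch duty cycle D such that vout = 2 * D * vin * (n_s / n_p)."""
    return vout / (2.0 * vin * (n_s / n_p))

for vin in (48.0, 57.6):  # discharged battery (UVLO) .. bulk charging voltage
    d = required_duty(vin)
    print(f"Vin = {vin:5.1f} V -> D = {d:.3f} (must stay below 0.5)")
```

Both endpoints stay below the theoretical 0.5 limit, which is why this ratio can regulate 400V over the whole input range.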

Capacitor Considerations

Input capacitors do not need to be large because of the low battery impedance and stable voltage.

Output capacitors should be low ESR. We chose expensive high-capacity, high-voltage electrolytic capacitors to maximize hold time, lower ripple, and keep the choke sizing reasonable.

Hold time considerations mostly depend on the load parameters.
Additional 450V MKP film capacitors were used. Do not use X2 line capacitors, as they are designed to fail short.
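
Hold-up time can be estimated from the stored-energy balance t = C·(V1² − V2²)/(2·P). The capacitance and allowed droop below are illustrative assumptions, not the actual board values:

```python
# Hold-up time sketch: t = C * (V1^2 - V2^2) / (2 * P).
# C and the allowed droop are assumed figures for illustration only.
def holdup_time(c_farad, v_nom, v_min, p_load):
    return c_farad * (v_nom**2 - v_min**2) / (2.0 * p_load)

t = holdup_time(470e-6, 400.0, 370.0, 1600.0)  # 470 uF, 30 V droop, full load
print(f"hold-up time ~ {t*1e3:.2f} ms")
```

This shows why a large output capacitance is needed if meaningful hold-up time at full load is a requirement.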

MOSFET considerations.

In our design, each MOSFET is subjected to a 25A average current, with peaks of up to 120A due to parasitics (at turn-on) when low-value gate stopper resistors are used.

Infineon’s IPP110N20N3 is rated for an Id of 88A and pulses up to 352A.
The datasheet is available at :

https://www.infineon.com/cms/en/product/power/mosfet/n-channel/ipp110n20n3-g/

Thermal management of the MOSFETs is of utmost importance. A clever prototyping solution could make use of radiator/fan bundles designed for CPU cooling, as they often integrate heat pipes. This would, however, significantly complicate the layout and increase the total prototype volume because of the sheer bulkiness of these components, and it would require drilling the heatsink backplate. A single modern, standard CPU heatsink/fan can cool loads of around 100W. Calculate the maximum MOSFET losses and design the solution accordingly.
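
A first-order junction temperature estimate follows from conduction losses and the junction-to-ambient thermal resistance chain. The Rds(on), switching-loss, and thermal resistance figures below are assumptions for illustration; take real values from the IPP110N20N3 and heatsink datasheets:

```python
# First-order MOSFET thermal sketch. All numeric figures are assumptions;
# check the MOSFET and heatsink datasheets for real values.
def mosfet_losses(i_rms, rds_on, p_switching):
    return i_rms**2 * rds_on + p_switching   # conduction + switching losses (W)

def junction_temp(p_loss, t_ambient, rth_total):
    return t_ambient + p_loss * rth_total    # rth_total = junction-to-ambient (K/W)

p = mosfet_losses(i_rms=17.0, rds_on=0.011, p_switching=3.0)  # ~6.2 W per device
tj = junction_temp(p, t_ambient=40.0, rth_total=3.5)
print(f"P ~ {p:.1f} W, Tj ~ {tj:.0f} C")
```

The point of the exercise is to verify a comfortable margin below the maximum junction temperature before committing to a heatsink.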

Never operate MOSFETs under load without proper thermal management!

Overcurrent Protection.

The switching IC provides a hard current limit that stops switching in case of overcurrent, with a resume algorithm explained in the datasheet. For soft current limiting and constant-current (CC) output, you’ll have to implement a current monitor yourself, plus a feedback signal that overrides the CV signal fed to the optocoupler in an overcurrent situation, decreasing the output voltage.

Transformer considerations: turn ratio, switching frequency, minimum inductance.

Check this resource for basic formulas.

https://www.analog.com/en/technical-articles/high-frequency-push-pull-dc-dc-converter.html
Also, check :
http://tahmidmc.blogspot.com/2013/03/output-inductance-calculation-for-smps.html

Inductance range tolerance is high since the push-pull converter is based on transformer action. It is not a critical design parameter.

The allowable duty cycle range will dictate the maximum voltage differential between the primary and secondary, in combination with the output voltage range. The fact that the input could already be thought of as regulated (it is a battery, but may be subjected to higher voltages seen by the converter during charging) and that the output voltage is designed to be kept constant eases the design.
It is however important not to drive the core into saturation, so we have chosen a large T400-26 iron powder core, although thermal effects (and the decrease of inductance they cause) will play the limiting role rather than saturation. Here the margin is <add figure>
A larger core also allows a lower fill factor which will improve cooling and reduce ohmic heating from the current flowing into the conductors.
It also makes manual construction easier.
As said before, controlling the winding balance of the primary is critical in push-pull converters.
An imbalance gives rise to a bias flux buildup that decreases efficiency.

A low fill factor also allows better cooling performance, even more so if forced (fan) cooling is used to blow axially through the core.
When building a transformer, optimization is complex because of the large parameter domain.

For such a design, it is advised to first perform a faster simulation using standard LTspice coupled (linear) inductors: specify the inductances based on push-pull converter design formulas and check that the circuit performs well on that basis, taking efficiency changes into account with each parameter change, and performing load stepping and input voltage range stepping. Efficiency should be high using a fully linear transformer.
If not, something is amiss. Remember that efficiency also depends on load, and is usually lower at very light loads.

On the basis of this first design iteration, look at the current flowing through the primary (RMS and peak values); knowing the number of turns, you’ll get the H field strength, from which you get the B field strength: B = µ0*µr*H. Then you can look at material tables and check that you are operating within a safe margin. It is a bit complex because there are derating factors, for instance for frequency.
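
The steps above can be sketched numerically. The magnetic path length and relative permeability below are illustrative figures roughly in the range of a T400-size core in 26 material; take real values from the Micrometals data:

```python
import math

# B-field check sketch: H = N * I / le, then B = mu0 * mur * H.
# le and mur are illustrative figures; use the core datasheet values.
MU0 = 4 * math.pi * 1e-7

def b_field(turns, i_peak, le_m, mu_r):
    h = turns * i_peak / le_m      # field strength in A/m
    return MU0 * mu_r * h          # flux density in Tesla

b = b_field(turns=24, i_peak=50.0, le_m=0.25, mu_r=75.0)
print(f"B ~ {b:.2f} T vs ~1.5 T saturation for iron powder")
```

With these assumed numbers the core sits well under the soft saturation region, which is the margin the text asks you to verify.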

Thermal runaway is the situation you absolutely want to avoid (heating decreases inductance, which increases the magnetizing current, making the core saturate even more and inducing further losses that feed the runaway).
Our strategy is low-cost driven. We chose low-permeability iron powder cores and decreased the switching frequency to minimize core losses (iron powder cores perform better at lower switching frequencies).
Iron powder cores are low permeability because of the distributed air gap between magnetic particles, and exhibit high (around 1.5 T) and soft saturation.
For the main transformer, we chose the low-cost 26 material and selected a large T400-26 core, allowing a comfortable fill factor. The transformer turn ratio requires a large turn number for the secondary.
For the output choke, we used a lower-permeability 18 material and a quite large stacked core. Output inductors/chokes operate at a high current bias level, which derates inductance. The inductance value is also a critical design parameter: with too low an inductance, output ripple and inductor heating will be higher, with a risk of thermal runaway and gradually decreasing stability. We also noted that with too low an inductance the design refused to reach our target voltage, which seems to be a protection feature of the converter IC.

To sum up, we have to obtain an inductance large enough for our filtering goals at nominal power while reducing the bias B field. The B field is proportional to the number of turns times permeability, all other parameters being equal, while inductance increases with the square of the number of turns times permeability. Thus it follows that a larger core with lower permeability accommodates a larger number of turns, meeting the inductance requirement while staying under saturation levels. To lower the B field, we also used a stacking strategy to increase the total compound core area. It is easier to wind that way than individually winding inductors and placing them in series on the board, especially if PCB real estate is a concern. It would, however, decrease cooling efficiency. We have not seen much of that method in commercial products, as it could also increase flux leakage. Better test it.
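
That trade-off (L ∝ µr·N² while B ∝ µr·N) can be illustrated numerically: halving permeability requires √2 more turns for the same inductance, yet still lowers the peak B field by √2. The core dimensions and current below are arbitrary illustrative values:

```python
import math

# L = mu0*mur*N^2*Ae/le ; B = mu0*mur*N*I/le. Core dimensions are arbitrary.
MU0 = 4 * math.pi * 1e-7

def turns_for(l_target, mu_r, ae, le):
    return math.sqrt(l_target * le / (MU0 * mu_r * ae))

def b_at(n, mu_r, i, le):
    return MU0 * mu_r * n * i / le

ae, le, i = 4e-4, 0.25, 10.0            # 4 cm^2 area, 25 cm path, 10 A bias
for mu_r in (75.0, 37.5):               # e.g. 26-like vs lower-mu 18-like material
    n = turns_for(1e-3, mu_r, ae, le)   # 1 mH inductance target
    print(f"mu_r={mu_r:5.1f}: N={n:5.1f} turns, B={b_at(n, mu_r, i, le):.3f} T")
```

The lower-permeability core needs more copper but operates further from saturation, which is exactly the argument made for the 18 material choke.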

EMI concerns.

The low-frequency operation at 25kHz reduces EMI concerns, depending on the regulations for that VLF band and the other components that may be subjected to it. It is above the audible range, but some harmonics may find their way into audio equipment. (The IC switching frequency may go up to 1MHz; changing the frequency would require choosing a better-suited ferrite instead of an iron powder material and adapting core dimensions, usually to smaller ones. However, we had trouble making the simulation run smoothly at higher switching frequencies.) A higher switching frequency also has a dramatic impact on simulation performance, as the minimum timestep has to be lower.

Of course, general layout guidelines apply, such as reducing the loop area of switching components traces paths (MOSFET drain to source) and the length of gate signals. Shielding is an option if it does not interfere with cooling.

A lower frequency, however, could make the core produce an audible stridulation effect, because of magnetostriction at a frequency close to the audible range.

Core design helpers :

Our advice is to use an LTspice core test bench, using the resources here:
https://www.eevblog.com/forum/projects/arbitrary-%28saturable%29-coupled-inductors-in-ltspice/

For more information about the Chan model :

https://ltwiki.org/index.php?title=The_Chan_model

You will also need these very useful resources :

  • Magnetics catalogs and material datasheets from major Western manufacturers: TDK/Epcos, Ferroxcube, Magnetics Inc., Micrometals.
  • The same for Asian ones: JUNCAN, Tangda, Caracol, etc… A popular seller is Tangda if you need to source (relatively) cheap cores from China.
  • The B/H curves, when you can find them, if the Chan model parameters are not specified.
  • Magnetics cross-reference lists, images, and PDFs. These make selecting cores a little easier when switching from one manufacturer to another.

If you need to look at a B/H curve, or if you have experimental data from a B/H curve tester (usually pluggable into an oscilloscope), you will need to find the crossing points on :

the B axis (y) : for Br (B field – remnant) and Bs (B field – saturation)

and the H axis (x): for Hc (H field – coercivity).

Bs is the saturation B value (the horizontal asymptote) for hard saturation; for soft saturation it is determined differently, as the B field keeps increasing, albeit with a lower and lower slope.

A good rule of thumb for soft-saturating materials is to stay in the linear region, with a good (30 to 40%) margin.

Fortunately, Chinese manufacturers provide the B/H curves and Br, Bs, and Hc.

With all these data collected, you are ready to test the cores in the LTspice Chan model test bench.

  • Use the manufacturer-supplied geometric data: OD, ID, and height. We updated the test bench with geometric data calculators for the magnetic length and area required by the model.
  • Input the magnetic data for the Chan model, taking care to use SI units: Hc (A/m), Br (T), Bs (T).

Alas, Western manufacturers usually provide the Al value and Bs (B sat) but almost never an exploitable full B/H curve. You will need to contact them for this, but it may be a trade secret, who knows?

What you can do, however, is parse scientific publications to find harder-to-find values such as Hc and Br; the problem is that they come for generic alloys (say, MnZn) or very exotic ones. A comprehensive database of core parameters is clearly needed at this point.

If the data is in Oersted and Gauss, multiply the Oersted value by 79 to get A/m and for Gauss divide by 10000 to get Tesla units.
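
These CGS-to-SI conversions are easy to get wrong in a hurry; a minimal helper, using the more precise factor 1 Oe ≈ 79.577 A/m:

```python
# CGS to SI magnetic unit conversion helpers.
def oersted_to_a_per_m(oe):
    return oe * 79.577        # 1 Oe = 1000/(4*pi) A/m ~ 79.577 A/m

def gauss_to_tesla(gauss):
    return gauss / 10000.0    # 10000 G = 1 T

print(oersted_to_a_per_m(1.0), gauss_to_tesla(15000.0))  # ~79.6 A/m, 1.5 T
```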

Here is some reference data, mostly for iron powder core materials, that you may find useful :

An important note that I have not confirmed at this point: the iron powder material data found on Chinese resellers’ pages are (presumably) for the pure (no distributed gap) material; thus, if you plug the data into mag_inc_bias.asc, you’ll find an abnormally large Al value.

So the strategy is to set primary_turn in the test bench model to 1, and play around with the core Chan model parameter Lg (gap length value in meters) until you obtain the nH/t^2 value that is specific to the core.

Remember to set I_bias to 0. There is also a 60nH inductor in series that would need to be set to 0 for adequate measurements of very small inductances. I have no idea what its purpose is.

SUMMARY OF INDUCTOR TEST BENCH SETUP

  • Create one asc file based on mag_inc_bias for each core (makes life easier)
  • Fill in geometric data
  • Fill in material data (Hc, Br, Bs)
  • Set primary_turn to 1.
  • Set I_bias to 0.
  • Find distributed air gap equivalent gap length L_g by trial and error (examine inductance .meas in the error log) until the inductance value is equal to datasheet Al.
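
The target of that trial-and-error step is the datasheet Al value, which relates inductance to turns through L = Al·N². A quick helper to convert between the two (the example Al figure is hypothetical):

```python
# Al value helpers: L = Al * N^2, with Al expressed in nH per turn^2.
def inductance_from_al(al_nh, turns):
    return al_nh * 1e-9 * turns**2      # inductance in Henries

def al_from_inductance(l_henry, turns):
    return l_henry * 1e9 / turns**2     # Al in nH/t^2

l = inductance_from_al(360.0, 66)       # hypothetical 360 nH/t^2 core, 66 turns
print(f"L = {l*1e3:.2f} mH")
```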

This is particularly useful to test for the inductance decrease due to bias current (as seen in the output inductor choke).
If you need to lower the testing frequency, you’ll need to increase the simulation time, because the measurements use the 15th/16th RISE/FALL cycles for inductance measurements; otherwise, you’ll get “measurement failed”.

Inductance measurements are required for the choke, for the transformer, just check that the B field remains under Bsat with some margin.

Note that the I1 source is used for inductance measurements (it is set at 1mA, hence the x1000 factor in the inductance measurement).
The measurement is based on V = L*di/dt; for a sinusoidal test current of amplitude I0 at frequency f, di/dt at the zero crossing equals 2*pi*f*I0, so L = V/(2*pi*f*I0).

Increasing I2 will decrease inductance. This is used for the choke measurement (under DC bias). Test with an I2 value equal to the maximum allowable output current, with the frequency and the choke turn number set correctly.
Verify that the inductance value is still above requirements and that the B field in Tesla is not above Bsat.
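
The zero-cross measurement boils down to L = V/(2π·f·I0), with I0 = 1 mA explaining the x1000 factor. A quick check of what a measured voltage amplitude implies (the 15.7 mV reading is an invented example):

```python
import math

# Inductance from the zero-cross voltage amplitude: L = V / (2*pi*f*I0).
def inductance_from_v(v_amplitude, freq, i_amplitude=1e-3):
    return v_amplitude / (2.0 * math.pi * freq * i_amplitude)

l = inductance_from_v(0.0157, 25e3)   # e.g. 15.7 mV at 25 kHz with the 1 mA source
print(f"L ~ {l*1e6:.1f} uH")
```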

As a final note, it should be said that the Chan model has been superseded by the Jiles-Atherton model, which shows better fidelity to experimental BH curves.
Unfortunately, LTspice models using the JA model (CoreJA) are prohibitively slow for use in power product simulations, but the test bench could be adapted to use CoreJA. The advantage of the Jiles-Atherton model is that you can find a database of JA parameters for a lot of cores in the magnetic.txt file of the ZZZ library. This is the famous Bordodynov library (also known as the Yahoo LTspice group library or the LTspice groups.io library). It is a must for every serious LTspice user.

Software also exists that helps in complete solution design with an emphasis on magnetics, such as ‘ExcellentIT’; there are also good product finders on manufacturers’ websites to help in core selection.

Once you have made a provisional choice of cores, turn numbers, and turn ratio, you can replace the linear models with the Chan model using the turn numbers.
The Chan model slows the simulation only very moderately.

Isolation

This is an isolated design. However, LTspice complains when using separate grounds unless they are stitched by very low-value resistors. Here we used a 0 ohm resistor between GND (primary ground) and COM (secondary ground); in practice, of course, there is galvanic isolation.

Optocoupler tuning

We used a TL431 to provide the stable 5V reference required by the optocoupler output transistor.
To provide current to the optocoupler diode, we use a modified (improved) Howland current pump.

Using a simple shunt resistor of around 240k to control the current flowing into the diode induces noise in the simulation; it should be tested in practice. The advantage is that such a solution would be passive and would not require a low-voltage DC supply operating on the isolated side.

More information about the improved Howland current pump is available here :
https://www.ti.com/lit/an/sboa437a/sboa437a.pdf

Compensation network

Compensation network time domain testing :

Replace the passive resistor load with an active load (flagged as load).
The compensation network can be tested for stability by stepping the active load and examining the induced voltage oscillatory response: its amplitude and its damping characteristics.

For more information on compensation networks :
https://www.analog.com/media/en/analog-dialogue/volume-57/number-2/step-by-step-process-to-calculate-a-dc-to-dc-compensation-network.pdf

LTspice (the latest version) also offers transient frequency response analysis. It combines transient analysis (so that the circuit operates normally) with a small-signal stimulus on the input voltage side. The small-signal response at the output is analyzed so that a Bode plot can be drawn and checked for stability (gain margin and phase margin, taking into account the switching frequency versus the frequency locations of poles and zeros).

Combining frequency analysis with transient analysis has the advantage of not requiring specialized frequency-response models for the IC (which are not always available).
In this model, the input voltage is stable and the output capacitance is large with a low ESR, which helps stability. A good test would be to introduce a disturbance by simulating a charging operation in the bulk (constant current) charging phase.

Powering up the IC, the Optocoupler, and the current source OpAmp

The switching IC has access to the primary-side battery power. As it is a well-behaved supply, there is no need to power the IC from an auxiliary winding of the main transformer. Powering the IC is well documented in the IC datasheet; in our case, the design is simpler.

Note however the presence of the R33 resistor, which shunts some current from the primary DC link into the IC, charging its supply capacitor faster than the LM317 alone would and allowing the IC to start faster. The datasheet uses a 2k value for a 12V primary; we just scaled it linearly.
In this design, we used a simple LM317 regulator, which may also be used to power other logic loads. The LM317HV version tolerates the battery bank voltage. You could also use a lower-voltage version and power it from the single battery unit closest to ground, which would have its positive terminal sitting at 12V above primary ground. Note that the IC is internally regulated at about 10.5V and can operate, per the datasheet, with as little as 8V. We found that it needs 10.5V during startup, so we set up the LM317 to supply a constant 10.5V. The absolute maximum rating is 12V. We also used a 60V Zener pre-regulator to protect the LM317 in case of a voltage transient (which could come from charging operations).

For the secondary, things are a bit more complicated. The only active component here is the OpAmp of the Howland current pump driving the LED of the optocoupler.
In reality, it is almost guaranteed that other logic or control components will be operated at low voltage with a secondary ground reference, so we used a 5V setting for the secondary-side LM317. This low voltage did not seem to negatively affect the operation of the Howland current pump OpAmp.

We could assume that the 400V link always has access to power, for instance a rectified mains AC power source output from a 400V PFC unit.
In UPS and solar applications, that may not always be the case; take as an example the “cold start” of a UPS from the battery in the absence of mains power.

The absence of power to this component means no voltage feedback signal to the switching IC, so it needs to power up quite fast (well before the secondary reaches 400V DC).
For this, we use a secondary auxiliary winding, a rectifying diode, and an LM317 set up for 5V output to power the OpAmp. The LT1001 OpAmp is fully turned on at around 2.5V.

An optional Zener could be added as a TVS function to clip transients above the LM317 rating.

MOSFET parasitics, ringing, and leading-edge current spikes.

Figure 4 shows leading-edge current spikes; they are not associated with ringing (as they are fully damped). The following thread identifies the culprits as the reverse recovery time of the secondary-side rectifying diodes as well as the gate pulse. To minimize these effects, one can use fast-recovery diodes for the secondary rectifier, or increase the gate stopper resistor values (but the latter has drawbacks, as we’ll see in a moment). Reducing these spikes by using fast-recovery diodes may increase overall efficiency and decrease HF EMI (the spike frequency is substantially higher than the switching frequency).

In our simulation, the current spikes stay well under the 352A maximum pulse current specification of the MOSFET, so they should not damage the MOSFET over the long term (when using standard silicon diodes for the secondary’s rectifiers).

https://e2e.ti.com/support/power-management-group/power-management/f/power-management-forum/680959/lm3481-lm3481-current-spike-when-mosfet-turn-on

Voltage transients and ringing.

Although this model does not exhibit this unwanted phenomenon, it is probable that a real-life implementation would because of parasitics that are not modeled here.

Ringing comes from the MOSFET parasitic capacitance coupled with the driven circuit (transformer) inductance as well as trace inductance. Most push-pull converter designs come with some sort of snubber (an RC series circuit across drain and source, tuned to the problematic ringing frequency). However, value tuning is quite layout-dependent. Having broad but short gate traces also helps manage the problem. Make sure your layout leaves room to add a snubber.

A gate stopper resistor (here 1 ohm) may also help, but its value cannot be pushed too high: you also have to take the gate capacitance into account. A large gate capacitance cannot tolerate too large a gate stopper resistor, or the MOSFET will turn on slowly, and the slow turn-on will increase the average Rds. On the other hand, too low a gate stopper could push the gate currents above the IC specifications, especially if the MOSFET gate capacitance is high (which is somewhat the case for high-power MOSFETs). Thus a snubber seems like a good option.

Remember that this part may exhibit different behavior in real life due to the non-modeling of all parasitic effects and their layout dependence.

Using a 4-ohm resistor instead of 1 ohm decreases the peak pulse current from 210 A to around 160 A.

Figures

Figure 1: Voltage ramp-up / soft start. Load stepping after steady state is reached
Figure 2: Voltage transient / Stepping load from 0A to 4A
Figure 3: Voltage transient / Stepping load from 4A to 0A
Figure 4: MOSFET Drain current (spikes due to gate pulse and secondary diode recovery)

Model Information

The LM317 model as well as the LTC3721 model should be present in a recent LTspice installation.

You may need Infineon’s IPP110N20N3 model.

This model has been tested on an LTspice installation using the ZZZ (LTspice groups.io community) library; it is advised that you install it.

Download

SC-120W 12V PSU with UPS function review.

or How to battery backup your router the proper way.

This unit is a good choice to provide a long UPS runtime during a power outage for small but critical loads like an Internet router, where a full 110V/220V UPS is overkill. A typical Internet router power rating is around 15W. Assuming the SC-120W is wired to a 60Ah 12V battery at 50% depth of discharge, the router could keep working for almost 20 hours during an outage.
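
That runtime figure can be reproduced with a back-of-the-envelope energy budget; the 85% conversion/wiring efficiency below is an assumption on my part:

```python
# UPS runtime estimate. The efficiency figure is an assumed value.
def runtime_hours(capacity_ah, v_batt, depth_of_discharge, efficiency, p_load):
    return capacity_ah * v_batt * depth_of_discharge * efficiency / p_load

t = runtime_hours(60.0, 12.0, 0.5, 0.85, 15.0)
print(f"~{t:.1f} hours")  # close to the 'almost 20 hours' figure
```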

Voltage Setup.

There is a white screw (potentiometer) near the battery cable. It regulates two outputs:

  • The float voltage of the battery. For a lead-acid battery, this should always be set to at least 12.9V, adjusted by some amount according to the ambient temperature, but it will always be more in a realistic scenario: the float voltage is between 13.1V and 13.8V. That means this unit is not made for frequent outage cycles, since it would also cycle-charge the battery, and charging is usually done at higher voltages.
  • This screw also regulates the working voltage of the load, minus a diode drop it seems (that is, the load voltage will be the battery float voltage minus 0.6V).

So, for example, if you set the battery float at 13.8V, the load will be supplied 13.2V. Routers have voltage regulators inside. As a rule of thumb, electronics tolerate a ±10% deviation from the nominal supply voltage, which is 12V for most routers. Putting 13.2V into the router could damage its regulator over the long term, leading to a reduced lifespan.

The proper way to do it: insert a DC/DC step-down module, such as one based on the LM2596A, between the SC-120W and the router.

DC/DC step-down module to convert 13.8V into 12V. The 3A rating is adequate for most routers.

Adjust the step-down module to output 12V; now you can set an appropriate float voltage for the battery without damaging the router.

Charge current limiting.

There are no regulating screws for charge current limiting, so if you set the voltage too high and the battery is discharged, it could draw current up to the overcurrent protection limit. I did not push the test unit far enough to see whether that limit is 10A or less (the PSU nominal power rating). Nevertheless, it is recommended to charge a battery at a current no greater than 0.1 times its total capacity. If you use a small battery and there are some outages, reduce the float voltage accordingly. Best is to use a large battery: you can then set a higher float voltage, its lifetime won’t be affected by charging currents that mostly fall under 0.1 * total capacity, and it will benefit from the higher float voltage.
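
The 0.1C rule of thumb is easy to sanity-check per battery:

```python
# Max recommended charge current: 0.1 * capacity (the 0.1C rule of thumb).
def max_charge_current(capacity_ah, c_rate=0.1):
    return capacity_ah * c_rate

print(max_charge_current(60.0))   # ~6 A for a 60 Ah battery
```

With a 60Ah battery, the unit’s worst-case charging current thus stays in an acceptable range even near the protection limit.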

Optional setup for frequent and long outages.

In case of frequent and/or long outages, it could theoretically be possible to use, in addition to the SC-120W, a dedicated battery charger that charges the battery with the bulk (constant current) phase, topping charge, and trickle charge phases commonly used in “intelligent” chargers. This would be done by connecting the battery terminals to such a charger, with a diode between the SC-120W and the battery to prevent the SC-120W from backfeeding the battery; that is, current flow would be blocked from the SC-120W to the battery and only allowed from the battery to the SC-120W. Charging would be performed only by the added dedicated charger. Such a charger should be a model rated for continuous use.

Overall, it would serve to protect the SC-120W from having to deliver large currents to charge the battery up to its nominal capacity after a long outage where the battery is deeply discharged.

The downside of that setup would be the additional voltage drop of the diode between the SC-120W and the battery while operating on battery power, as it could bring the voltage below 12V.

Ideally, the voltage setting of the SC-120W would have to be more or less the same as the charger’s maximum voltage minus the diode’s voltage drop, and that maximum is program-phase dependent. To complicate matters further, the diode’s voltage drop also depends on current, and some chargers output high voltages for short periods during the equalization phase. Best is to have a good charger datasheet that explains clearly how charging is done and allows a good amount of configuration. Having the voltages more or less equal could help the regulation of the SC-120W. As the circuit of the SC-120W is not published, it is hard to say how the unit would behave if it were set at, say, 13.2V while a charger outputs 14.4V: the SC-120W would then see 13.8V (accounting for the voltage drop of the added diode) while its setpoint is at 13.2V.

I encourage you to perform extensive and careful testing if you wish to go this way.

Behaviour while in UPS mode.

Voltage regulation behaviour should also be tested when the unit is on battery (without AC). Unfortunately, I neglected to record data for this part of my testing protocol. If I remember well, the voltage seen by the load is then the battery voltage minus a diode drop; that is, the SC-120W does not perform DC step-up conversion to keep the voltage at the setpoint of nominal (on AC) operation. Again, this should be tested, as designs may change over time. I will update this material when able to perform the test.

Behaviour on battery power loss / cold start mode

One peculiarity of the unit: if the battery connection is lost during a power outage and is subsequently regained while the outage continues, or if the unit is connected to a battery to supply a load while it has no AC power, the load will not come back online by itself. There is a little button near the voltage potentiometer that forces power to the load; this function is a form of cold start.

It may also be possible that this function intervenes if the battery voltage falls under a certain level, to protect the battery, but I have not tested it.

LED indicator strip

There are three LEDs on a strip connected to the unit through a ribbon cable. This is practical for an industrial front panel installation.

  • One shows that AC is available (red LED)
  • One shows that DC power is supplied to the load (green LED)
  • One shows that current is flowing between the SC-120W and the battery (red LED). It glows brighter under high-current conditions. I suspect this is the “battery is charging” indicator: as the charging current settles down, the red LED slowly turns off. I have not tested whether this LED also glows when a high current is drawn from the battery.

A final warning.

The battery cable of the SC-120W does not have built-in fuses. It is always required to fuse battery links: add a section of cable with male spade connectors (the SC-120W battery terminations use female spade connectors) with a fuse in it.

Overall rating.

The unit is sturdy, the UPS function has performed very well for over a year, and it charges/discharges the battery as advertised. There is zero downtime during switching events. The only downsides are that it offers CV control only (no constant-current charging phase) and that the load voltage is tied to the float voltage of the battery. Given the price of the unit, it is overall worth it.

There is also a 180W unit. This could also prove a good choice to provide battery backup to larger DC loads like NAS appliances. Beware: some units require a dual +5V/+12V PSU, and replacing that PSU with the SC-180W would require two higher-rated DC/DC step-down converters instead of one.

Thermionic VST3

What is Thermionic?

Thermionic is an FX VST3 Plugin.
It is waveshaping in nature, applying a custom dynamic range compression followed by a single-ended triode amplifier emulation.
The single-ended triode circuit uses a SPICE algorithm to solve the circuit state.
It can do both very low THD and heavy distortion/waveshaping effects.

The idea for this plugin came after designing this waveshaping compressor in LTspice. Then I thought that it would be cool to add a final triode stage.

Custom compression :
The compression applies a linear-to-log conversion with an additional coefficient (parameter k) that alters the transfer function to provide varied waveshaping effects.
Also, the knee effect can be tailored to provide distortion at the knee level. It is roughly a digital implementation of the LTspice “knee-breaker” compressor that I discussed in a previous post.
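
The exact transfer function is not published, but a linear-to-log waveshaper with a shaping coefficient k can be sketched as below; this is my own illustrative formula, not the plugin’s actual code:

```python
import math

# Illustrative linear-to-log waveshaper (hypothetical formula, not the plugin's).
# k controls curvature: small k ~ nearly linear, large k ~ strong log compression.
def waveshape(x, k=5.0):
    y = math.log1p(k * abs(x)) / math.log1p(k)   # maps [0, 1] onto [0, 1]
    return math.copysign(y, x)                   # odd symmetry for audio signals

samples = [-1.0, -0.5, 0.0, 0.5, 1.0]
print([round(waveshape(s), 3) for s in samples])
```

The odd symmetry keeps the shaper DC-free for symmetric inputs, while k alters the transfer curve and hence the harmonic content, which is the role the k parameter plays in the description above.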

As for the triode implementation, it uses a classic 12AX7 single-ended configuration circuit, with AC coupling at the input and output stages for the default preset; additional presets are available.
It features a cathode bypass resistor and capacitor, grid current emulation, and Miller capacitance.

The intrinsic triode parameters are based on Norman Koren’s model and allow the emulation of other models of triodes.
Grid current is based on Rydel’s model.
A non-realistic Vact parameter is added to start grid current distortion at an arbitrary input voltage level, disregarding biasing.
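
For reference, Norman Koren’s triode plate-current model, with his published 12AX7 parameters, can be sketched as follows; grid current (Rydel) and the Vact extension are not included here:

```python
import math

# Norman Koren's triode plate-current model with his published 12AX7 parameters.
# Grid current (Rydel's model) and the plugin's Vact extension are not modeled.
def koren_plate_current(v_plate, v_grid, mu=100.0, ex=1.4,
                        kg1=1060.0, kp=600.0, kvb=300.0):
    e1 = (v_plate / kp) * math.log1p(
        math.exp(kp * (1.0 / mu + v_grid / math.sqrt(kvb + v_plate**2))))
    return 2.0 * e1**ex / kg1 if e1 > 0 else 0.0   # plate current in amperes

ip = koren_plate_current(250.0, -2.0)   # a typical 12AX7 operating point
print(f"Ip ~ {ip*1e3:.2f} mA")
```

Sweeping v_grid through this function is what produces the soft, asymmetric transfer curve characteristic of single-ended triode stages.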

Due to the state-based nature of the circuit, some parameter combinations may result in divergence and are non-recoverable, but this is rare. Some parameter combinations also require more CPU time to find a solution.
So, it is advised to tread carefully when modifying parameters.

Saving and restoring a preset is possible, and the simulation should restart all right, but with a delay of a handful of seconds to obtain convergence.

abstol defines the SPICE solution tolerance; tightening it improves quality at the expense of more CPU load.
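A toy Newton-Raphson solve of a single diode node illustrates what abstol trades off; the circuit and solver below are illustrative, not the plugin’s actual SPICE engine.

```python
import math

def solve_diode_node(vs=5.0, r=1e3, i_s=1e-12, vt=0.02585, abstol=1e-9):
    """Newton-Raphson on a node where a resistor feeds a diode.
    A tighter abstol means more iterations (CPU) for a more accurate
    circuit state. Component values are arbitrary examples."""
    v = 0.6  # initial guess near a diode drop
    iters = 0
    while True:
        iters += 1
        f = (vs - v) / r - i_s * (math.exp(v / vt) - 1.0)   # KCL residual
        df = -1.0 / r - (i_s / vt) * math.exp(v / vt)
        dv = f / df
        v -= dv
        if abs(dv) < abstol or iters > 100:
            break
    return v, iters
```

The looser tolerance stops the same iteration sequence earlier: fewer iterations, slightly less accurate node voltage.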

The plugin also features an experimental routing of the gain reduction to some triode parameters.

Additional features :

  • Oversampling from 0x to 3x.
  • Auto Make-up gain on/off with trimming.
  • Compressor / Triode bypass.
  • Compressor Stereo Linking On/Off.

Beta Caveats / Bugs :

  • Not all parameters are suitable for automation.
  • In some cases, when CPU load is high, switching on/off checkboxes and StringListBox parameters (for instance, Oversampling, GR Routing) may not be taken into account. If that happens, please stop audio processing, change the parameter state, and then resume audio processing.

Tests :

To ensure a quality product, we will try to test the plugin in the most varied configurations, including a wide variety of OS and Hosts configurations.
Most of the tests to date have been done on Renoise 3.41.

We also plan to enroll users for beta testing. Nothing is better than field experience.

Release Date :
To be announced; probably June to August 2023.

Minimum OS/Software requirements :

  • Windows 8 to 11.
  • x64 OS.
  • A VST3 compatible host.

What about Mac users?

We envision a plugin for Mac users, but we need more time to port some functions.

Will there be a Demo version?

There will be a demo version available on our portal soon, to start the beta field testing phase. For now, there is a YouTube video with a demo of the product for waveshaping applications.

Some Screencaps :

Triode operation, Default Settings, Harmonic content.
Triode operation while drawing grid current
WaveShaping operation.

For a more dynamic presentation, check the video :

Thermionic VST3 Demo Video

The knee-breaker

Analog compressor designed from scratch

NOTE: the download link is at the end of this article.

NOTE : A VST3 plugin was made implementing the knee management part of this compressor; more info here :
https://www.skynext.tech/index.php/2023/05/31/thermionic-vst3

This post demonstrates a VCA-based compressor with a variable knee.

The methodology used to design this compressor was to draw no inspiration from existing VCA compressor designs and rather start from compressor theory alone.
The fact that I had no previous exposure to VCA designs and no previous experience in analog compressor design is a double-edged sword.
On the one hand, it could end up with novel and unexplored means of achieving certain effects and sound coloration.
On the other hand, it could lead to questionable choices or overly complicated designs.
There is also a big risk of getting a design that works well “in silicon”, that is, running well in a SPICE simulator, but offering subpar performance in any realistic implementation. I am mainly thinking about SNR characteristics.

An understanding of theory is indeed required to build a compressor that achieves its intended goal, dynamic range compression.
The following paper is a good starting point :


[1] “Digital Dynamic Range Compressor Design— A Tutorial and Analysis” from Giannoulis, Massberg, and Reiss.

https://www.eecs.qmul.ac.uk/~josh/documents/2012/GiannoulisMassbergReiss-dynamicrangecompression-JAES2012.pdf

Our design however diverges quite fast from the standard gain computer architectures provided in the paper.
Figure 7 of [1] introduces the basic configurations of gain computers and sidechain detectors.

Our design is a variant of the log domain detector (7.c), in which the level detector comes before the gain computer.
Our level detector and A/R stages work on a log signal, and the A/R stages, being RC single-pole filters, exhibit exponential behavior in transient response;
the log and exp cancel out, which means that the overall A/R envelope with respect to the linear domain is linear.

This is one thing.

The next divergence from a classic compressor is the knee implementation.
Here we embarked in a really experimental direction.
Usually, the knee is an amplitude band of the signal in which the gain computer does not apply the full ratio. Its width is called the knee width, centered around the threshold.
At threshold − W/2, the compressor applies a unity ratio, which means no compression.
At threshold + W/2, the compressor applies the full ratio, that is, the set compression.

The goal of the knee is to provide a smooth transition between these two extremes.
When using high ratios which get close to limiting range, that is particularly useful.

Figure (4) of [1] shows the piecewise definition of the gain computer.

  • Yg is the output sample
  • Xg is the input sample
  • T is the threshold
  • W is the kneewidth

$$ 2(X_{g} -T) < -W \Rightarrow X_{g} $$

$$ \left |2(X_{g} -T) \right | \leqslant W \Rightarrow X_{g} + \frac{(\frac{1}{R} - 1)(X_{g} -T +\frac{W}{2})^{2}}{2W} $$

$$ 2(X_{g} -T) > W \Rightarrow T + \frac{X_{g} -T}{R} $$

(1) (2) and (3) form the piecewise definition of gain computer function.

In the knee zone, this function uses a quadratic segment to make the junction smooth.
The smooth junction between the two secant lines is in effect a quadratic Bézier curve.
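The piecewise gain computer of Figure 4 in [1] can be transcribed directly (Python, for illustration; the threshold, ratio, and knee width defaults are arbitrary):

```python
def gain_computer(xg, t=-24.0, r=4.0, w=12.0):
    """Piecewise gain computer per Fig. 4 of Giannoulis et al.
    xg, t (threshold) and w (knee width) are in dB; r is the ratio."""
    if 2.0 * (xg - t) < -w:
        return xg                      # below the knee: unity ratio
    if 2.0 * abs(xg - t) <= w:
        # quadratic (Bezier-like) segment inside the knee
        return xg + (1.0 / r - 1.0) * (xg - t + w / 2.0) ** 2 / (2.0 * w)
    return t + (xg - t) / r            # above the knee: full ratio
```

The branch values agree at both knee boundaries (xg = T ± W/2), which is exactly the smoothness property discussed above.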

Our approach is experimental in the sense that it uses the sidechain signal, bounded between -W/2 and W/2, as the ratio modulator; further, it applies an attack/release to that bounded signal.
To achieve proper results, the A/R settings should not diverge much from the main peak detector A/R, or the time constants will lose correlation and the knee may be applied outside of the knee range or not at all (ganged potentiometers would be required).

Furthermore, the processed knee control signal after the A/R stage is passed through a tanh() function cell using a BJT differential pair.
The resulting control signal does not guarantee smooth branching between the two gain lines and may induce undershoot/overshoot.

We expect it to introduce distortion in some cases and make the knee setting a complex task, however, it is interesting from an FX compressor point of view.

Most of the circuit complexity lies in the processing of the tanh() cell signal to normalize it, so that the compressor acts as a compressor and does not expand the signal, and applies no more than the set ratio of compression at the upper exit of the knee width zone.

Description of the knee control signal path.

As said before, we clip the log detector signal centered around the threshold between -W/2 and W/2.
Then we apply a similar A/R envelope. We then use the U22 op-amp to control the gain of the input signal to the tanh() cell; a stronger signal will result in a tanh() output that is more “square”.

The output of the tanh() cell is then passed to a normalizer using a VCA824 (U41) that ensures the signal “tanh_norm” is constrained: “vratio_buf” < “tanh_norm” < 0V.
This assumes that the previous tanh() cell signal is well behaved: -0.5V < “tanh_out” < 0.5V.
If not, the following clamping stage will take care of unwanted excursions.
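A behavioral sketch of this knee control path follows; the stage scaling and the sign convention of the control voltage (0 V = unity ratio, -vratio = full set ratio) are assumptions for the example, not values read off the schematic.

```python
import math

def knee_control(knee_sig, w, drive, vratio):
    """Behavioral sketch of the knee control path. knee_sig is the
    log-domain sidechain signal centered on the threshold, w the knee
    width, drive models the U22 gain into the tanh() cell, vratio the
    full-ratio control level. Assumed mapping: 0 V -> unity ratio,
    -vratio -> full set ratio."""
    # clip to [-w/2, w/2] (knee zone bounding)
    bounded = max(-w / 2.0, min(w / 2.0, knee_sig))
    # tanh() cell, kept "well behaved" within +/-0.5 V
    tanh_out = 0.5 * math.tanh(drive * bounded)
    # normalizer (U41 role): map [-0.5, 0.5] onto [0, -vratio]
    tanh_norm = -vratio * (0.5 + tanh_out)
    # clamp/clip stages: protect against excursions outside [-vratio, 0]
    return max(-vratio, min(0.0, tanh_norm))
```

Below the knee the modulator sits near 0 V (unity ratio); above it, near -vratio (full ratio); increasing drive squares up the transition, which is the “harder than hard knee” behavior discussed later.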

For normalization, we use a VCA824 IC. This is a linear-domain VCA; it is designed for HF but should work well at audio frequencies.
However, it would incur some additional cost in the final product. There is also the issue of input/output voltage offsets, which require calibration.

Then we have a clamping stage for supplementary protection :

This clamps “tanh_out_gain_inv” output positive excursions to the “-vratio” level (a positive value).
Positive excursions should not happen unless the tanh() cell current sink is above 247µA and the signal entering the tanh() cell makes the cell saturate.
If that were to happen, “tanh_out_gain_inv” would go above “-vratio”.

Finally, we added a bit of extra precaution so that the gain voltage “tanh_out_clamp2_clip_buf” fed to the U23 VCA824 never goes above 0V.
The tanh() normalization and clamping action is not perfect, and U41 (VCA824) always has some offset despite calibration, so we clip above 0V here.

The result is our modulated ratio gain signal. We use another VCA824 (U23) to apply this ratio to gain_before_ratio_inv as a part of the gain computer architecture.
The rest of the gain computer is a standard implementation.

Before the gain control signal is sent to the THAT2180 IC, it needs to be normalized so as to apply a 20·log10(x) transfer function.
This allows a simpler calibration when doing measurements.

A gain correction is applied by U37, taking into account the gain factor of the THAT2180 and the transfer function of the diode linear-to-log signal converter situated at the start of the sidechain, after the full wave rectifier.

We use a high current, low output impedance op-amp for U37, as the THAT2180 requires a low impedance gain control signal.

Note that the threshold stage setting also uses a linear-to-log converter instead of a log potentiometer. The idea was to thermally couple the matched diodes D4 and D8 so that when there are temperature fluctuations, the threshold does not change much.

Use of the model :

Set the parameters.

“threshold” at 1 is -∞ dB while 0 is 0dB
“kneewidth” close to 1 is 0dB while 0 is +∞ dB
“ratio” is the compressor ratio.
“attack” and “release” are resistance values of the attack/release stages.
Higher attack resistance value -> faster attack (attack capacitor charges quicker)
Higher release resistance value -> slower release (release capacitor discharges slower)
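Behaviorally, the A/R stages act as one-pole smoothers on the sidechain. The sketch below works with time constants directly; the mapping of the attack/release resistors onto these constants is specific to the circuit topology described above, and the constants used are arbitrary examples.

```python
import math

def ar_envelope(samples, fs, t_att, t_rel):
    """One-pole attack/release envelope follower (log-domain sidechain
    sketch). t_att/t_rel are time constants in seconds; in the circuit
    they come from the attack/release R and C values."""
    a_att = math.exp(-1.0 / (fs * t_att))
    a_rel = math.exp(-1.0 / (fs * t_rel))
    env, out = 0.0, []
    for x in samples:
        a = a_att if x > env else a_rel   # rising -> attack, falling -> release
        env = a * env + (1.0 - a) * x
        out.append(env)
    return out
```

With a 0.1 ms attack and 100 ms release at 48 kHz, the envelope snaps up on a step and drains slowly when the input drops, which is the behavior the two resistor settings control.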

Choose the input source signal

Edit the B1 voltage source and specify the voltage source to use. The “1V” voltage source uses a low-frequency (20 Hz) signal that allows seeing the action of the attack and release settings. There are also pulse/square waveforms, a triangle, and one using a wave file as input; do not forget to set the input and output waveform filenames to your requirements.

Load the waveform settings.

There are three waveform settings :

  • compressor_v0.9_monitor.plt: to compare input signal with compressor output.
  • compressor_v0.9_tanh.plt: to inspect the knee computer.
  • compressor_v0.9_gain_computer.plt: to inspect the gain computer.

Caveats / possible improvements :

  • The input AC coupling to both the sidechain and THAT2180 is sketchy. It could generate phase problems.
  • There is no makeup gain in this circuit.
  • There is no balanced input to unbalanced converters, nor unbalanced to balanced output.

However, adding these would slow the simulation even more; it is already really slow on a dual Xeon E5-2430v2 (between 50 and 100µs/s) when using a wave file.
Expect 2.75 hours for 1 sec of simulation at this rate.
Using standard test bench inputs gets a much higher simulation rate (in the 2 to 3 ms/s range).

  • The knee control signal (between the U33 and U46 stages) is a bipolar signal, which means we have to discard the diode between these stages to allow the signal’s negative excursions. This stage is no longer a decoupled attack/release stage but simply a first-order LP filter. The time constant, however, is determined by the {release} parameter and is the same as in the A/R signal stage of our model.
  • The two parallel paths with different A/R parameters (one subjected to a full-fledged A/R while the other to a simple LP filter), plus the different amplitudes due to the knee width selection, give overall a different signal profile at the attack vs. at the release, even when using the same resistor values for the attack and release resistors. With certain knee settings, this gives rise to a different effective threshold at attack vs. release. A symmetrical wave shape subject to this won’t be symmetrical anymore.
  • Wide knee widths give wide knee control signal amplitudes between the two clip values (-W/2 and W/2). In the case of small widths, the resulting signal fed to the tanh_out stage may not be sufficient to drive it to saturation. A better implementation would normalize this signal based on knee width. For now, one has to boost the U22 gain for small widths to ensure saturation. Failure to do so would make the ratio stage fail to reach its target ratio.
  • Control of tanh() saturation by means of U22/R65 has a large influence on the application speed of the final ratio, which means its effect may “take over” the attack/release characteristics of the classical A/R settings.

As a conclusion, using a tanh() cell was a questionable design choice if ease of use is a prime concern, but it introduces interesting waveshaping effects (if applied to an instrument, with very fast attacks and releases).

It also allows knees ‘harder’ than no knee at all. The effect can be seen as a transition from a unity ratio to a ratio higher than intended, before a gradual recovery to the desired ratio. The precise reason for this behavior would require an in-depth analysis of the simulation, and also a practical implementation to see if it can be reproduced. It could be useful to limit fast transients, since at the crossing of the threshold a super hard knee would help to bring them in check before reverting to the desired ratio.

I am posting the asc without the additional required components for copyright reasons.

Simulation Data

We will now show the compressor behavior before a full calibration is done on the output gain stage op-amp final amplifier, using theoretical values.

Figure 1 : This first plot shows a step of the knee width parameter with constant tanh() gain. For smaller widths, the tanh() cell fails to saturate and the full ratio is not applied, resulting in less compression. For larger widths, compression starts earlier in our design because the main gain computer applies the A/R envelope at (threshold - kneewidth/2); to this, the distortion effects induced by the tanh() cell are compounded. Overall, it achieves more compression at larger widths, which is quite paradoxical.

Figure 1 : stepping the knee width, R65=2k

Figure 2 : The second plot uses the same parameter stepping with a higher input gain to the tanh() cell, setting resistor R65 to 10k. The overall gain response is more or less the same for small knee widths, while larger knee widths exhibit a dramatic distortion effect.

Figure 2 : stepping the knee width, R65=10k

Figure 3 : We stepped the tanh() cell input gain resistor. The knee inversion from soft to “hardcore” is clearly visible.

Figure 3 : stepping tanh() cell input gain

Figure 4 : Zooming in on the previous figure to show knee inversion clearly :

Figure 4 : Effect of tanh() cell input gain. We can see knee inversions but also soft knees.

Figure 5 : We step the ratio from 1 to 10 by increments of 1 to show a more classic compressor behavior. The R65 resistor is set at 500, so the tanh() cell is not saturated. Gain reduction seems to hit a hard limit before reaching the effective ratio of 10. The compressor shows an effective ratio of 1.56 where it should be 2, 2.10 where it should be 6, and 2.23 where it should be 10. The gain computer works on a logarithmic signal, but the THAT2180 IC is an exponential control IC, that is, it expects dB input, so the result should be linear. We will have to investigate this. Part of the problem is due to the tanh() cell not being saturated, but that does not explain such low effective ratios.

Figure 5 : Effect of ratio on signal, unsaturated tanh() cell

Figure 6 : Same problem. We will have to check the R26 value that sets the U37 gain used in the final gain normalization stage. Also, maybe a diode voltage drop in the A/R stage was not accounted for. The best approach would be to set an acceptable knee width (not too small, to prevent inversions, and not too big, to avoid lowering the ratio), set a high ratio, and perform calibration by selecting a better value for R26, then re-check compression for a ratio of 2. Fortunately, a unity ratio results in a more or less identical signal, which means that there is probably no unaccounted offset in the gain/knee computers.

Figure 7: Effect of ratio on compression using R65 = 10k

Now in Figure 8, we see an adequate ratio using a knee width of 0.8, but at the expense of a very hard knee and signal distortion. Because of that offset, the signal appears to have high compression. In reality, it is more like a higher compression obtained by lowering the threshold (which the knee setting does).

Figure 8. effect of ratio on compression. Kneewidth setting 0.8

In the end, the sensible solution was to recalibrate the final gain normalization op-amp at the maximum rated compression level, let’s say a ratio of 20, so that the (linear) ratio of the output of the non-processed signal above the threshold to the output of the processed signal above the threshold is roughly equal to 20. This is done while setting the other parameters to midrange values and the threshold to -6dB, with the knee-width computer disconnected and R55 connected to ground instead. In the end, we set the R26 resistor value to 35k instead of the theoretical 20.8k.

Next, we set the ratio back to 1 and reconnect the knee width computer, since it is responsible for applying the ratio. We check that both amplitudes of the input and output signals match. The small discrepancies probably come from the VCA824’s small remaining offsets.

The process would then need to compute the ratios at step intervals and mark the potentiometer accordingly.

Figure 9: It’s much better now! The ratio potentiometer has also been replaced by an exponential pot to allow finer resolution at low ratios.

Figure 9. Effect of ratio on compression after calibration and using an exponential tap pot with linear stepping

This is the model for the exponential pot that I used :

.SUBCKT exppot 1 2 3 T=0.5
.param tap={limit(T,1m,.999)}
.param stiff={limit(stiff,1,10000)}

R0 1 3 {R*exp(-tap*ln(R*stiff))}
R1 3 2 {R*(1-exp(-tap*ln(R*stiff)))}
.ENDS
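The same taper law can be transcribed in Python to check its behavior; the r_total and stiff values below are arbitrary examples, not values from the model.

```python
import math

def exppot_resistances(tap_pos, r_total=10e3, stiff=100.0):
    """Python transcription of the 'exppot' LTspice subcircuit: splits
    r_total into the two pot sections for a tap position in (0, 1).
    r_total and stiff here are arbitrary example values."""
    tap = min(max(tap_pos, 1e-3), 0.999)  # same limit() as the subckt
    r0 = r_total * math.exp(-tap * math.log(r_total * stiff))
    r1 = r_total * (1.0 - math.exp(-tap * math.log(r_total * stiff)))
    return r0, r1
```

By construction the two sections always sum to r_total, and the upper section shrinks exponentially as the tap advances, which is what gives the finer resolution at the low end of the travel.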

Figure 10 : For the sake of completeness, we will now step the threshold, setting the effective ratio near 4.

Figure 10 : Effect of threshold on compression after calibration and using an exponential tap pot with linear stepping of the tap

Model Download

You need these models in your library to make the simulation run properly :

  • TL072
  • LT6244 (should be included in LTspice; check for updates if not)
  • LT1217 (should be included in LTspice; check for updates if not)
  • THAT2180
  • VCA824

Single phase Inverter synchronization to mains using time continuous phase angle approximation with analog components

For impatient visitors, the LTspice model download is at the bottom of this post.

In our previous post we discussed the method that uses ZCD + flip-flops to extract the phase angle (angle of synchronism) using pulses whose duty cycle is proportional to the phase angle, with a pulsing frequency of 2*f_ac, f_ac being the working frequency of the mains (grid) and inverter. Although this method is robust in the case of voltage variations, feeding pulses to our control loop required a more aggressive low pass filtering strategy, and it has a low gain at minimal phase angles. Overall, it makes the control loop tuning harder.

So we will now propose a time continuous analog estimation of the phase angle. It closely resembles the single multiplier phase detector in the shape of its output, but does not involve a multiplier. This method is projected to be significantly more sensitive to voltage swells/sags and transients or voltage imbalances between the mains and the inverter, as is the case for most phase detectors used in PLLs. So it will involve signal normalization as well. We will try to characterize the performance of this method compared to the classic multiplier based phase detector.

As in the previous post, here we are discussing synchronized inverters, not grid tied ones. As such, these inverters perform voltage control independently of the grid conditions; that is one of the main benefits of the double conversion (online) topology, which always supplies power coming from the inverter stage at a stable regulated voltage while the grid voltage may fluctuate. On the other hand, line interactive or offline UPS perform AVR only using an autotransformer with taps to buck or boost the voltage by fixed increments.

Since we have a potentially fluctuating grid voltage due to external conditions and a UPS voltage regulated at a nominal value (not taking into account voltage fluctuations due to regulation inertia), it is important to characterize the sensitivity of the following method to voltage imbalances, to assess its viability for the purpose of inverter phase synchronization.

Principle of operation

Instead of supplying the control loop with a pulse whose duty cycle is proportional to the phase angle (a positive pulse for positive phase angles and a negative pulse for negative phase angles), we supply the control loop with the differential signal of V_mains(t) and V_inverter(t), that is V_mains(t) – V_inverter(t), after scaling the source signals to a level compatible with op-amps. Although this works to extract the absolute phase angle, assuming that the two voltages are of the same amplitude, preserving the lagging/leading information, that is the sign of the phase angle, requires careful processing of that signal.

Assuming a constant phase angle different from 0° and that the amplitudes of V_mains(t) and V_inverter(t) are the same, we can see that V_mains(t) – V_inverter(t) changes sign when V_mains(t) = V_inverter(t), although the lagging/leading status is still the same. That is why we need to switch the V_mains(t) – V_inverter(t) signal to -(V_mains(t) – V_inverter(t)) when V_mains(t) = V_inverter(t), to preserve the lagging/leading information.

To encode the instants where V_mains(t) = V_inverter(t) using a basic sine to square circuit, we will feed the scaled down sum signal (labeled ‘sum‘ in the schematic), V_mains(t) + V_inverter(t), to a comparator to get a square wave signal. The rising edge will happen at the upward zero crossing of V_mains(t) + V_inverter(t), the falling edge at the downward zero crossing. The points where V_mains(t) = V_inverter(t) will sit firmly at the middle of each HIGH or LOW level time interval. The resulting square wave signal is labeled ‘sum_sq’ in the LTspice model.

To establish a processing logic, we will also need to convert the difference signal, labeled ‘difference’ in the schematic, into its corresponding square wave signal. This resulting signal is labeled ‘difference_sq‘ in the LTspice model. Note that the difference_sq signal switches polarity, that is, goes from RISE to FALL or vice versa, at the points where V_mains(t) = V_inverter(t). More precisely, it is rising at V_mains(t) = V_inverter(t) when both V_mains(t) and V_inverter(t) are positive, and falling at V_mains(t) = V_inverter(t) when both are negative.

We used the LT1716 comparators for the ZCD sine to square conversion. They also condition the square signals to 5V logic levels. The LT1716 tolerates an input going down to -5V relative to the negative rail (here GND) while still outputting a valid 0V output in this case. This information is available in the datasheet.

Next we will establish a truth table for the above two signals.

TRUTH TABLE      difference_sq RISE    difference_sq FALL
sum_sq HIGH              1                     0
sum_sq LOW               0                     1
D flip-flop truth table

Note that we compare an edge signal to a level signal; for this edge-triggered logic, a D-type flip-flop comes in handy. You may ask why we need this convoluted logic: it is necessary in order to preserve the leading/lagging information. To do that, we will need an additional logic stage between the above resulting signal, labeled ‘dflop‘ in the model, and the difference_sq signal. This time both signals are levels, so to establish the following truth table we will simply use a XOR gate.

TRUTH TABLE      difference_sq HIGH    difference_sq LOW
dflop HIGH               1                     0
dflop LOW                0                     1
XOR gate

The resulting signal will condition the state of the SPDT switch IC; the ADG333A IC is suitable for this application. The silicon SPDT switch will switch the output between the $$ difference $$ and $$ \overline{difference} $$ input signals.

And that’s how we get an approximation of the phase angle, preserving the leading/lagging information. Note that the logic signal coming into the silicon SPDT switch not only has the result of switching polarity of the difference signal when the phase angle goes from leading to lagging and vice versa, but also performs rectification of the difference signal.
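The whole conditioning chain can be checked numerically. The sketch below implements the two truth tables as tabulated and the SPDT switch on ideal sampled sines; the output polarity with respect to leading/lagging depends on the switch wiring, which is assumed here.

```python
import math

def phase_detector(phi, n=2000):
    """Discrete-time sketch of the conditioning chain: ZCD squarers,
    the edge-triggered latch per the D flip-flop truth table, the
    XOR-type gate, and the SPDT switch selecting difference or
    -difference. Returns the mean of the conditioned output over one
    cycle; its sign encodes leading vs lagging."""
    out, dflop, prev_dsq = [], 0, None
    for i in range(n):
        th = 2.0 * math.pi * i / n
        vm = math.sin(th)              # mains (scaled)
        vi = math.sin(th - phi)        # inverter, shifted by phi
        sum_sq = 1 if (vm + vi) > 0 else 0
        dsq = 1 if (vm - vi) > 0 else 0
        if prev_dsq is not None and dsq != prev_dsq:
            # truth table: RISE latches sum_sq, FALL latches its complement
            dflop = sum_sq if dsq == 1 else 1 - sum_sq
        prev_dsq = dsq
        gate = 1 if dflop == dsq else 0  # gate per the table (XOR up to polarity)
        out.append((vm - vi) if gate else -(vm - vi))
    return sum(out) / n
```

The mean output flips sign with the sign of phi and grows with its magnitude, confirming that the logic both rectifies the difference signal and preserves the leading/lagging information.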

To better illustrate the action of the whole signal conditioning logic, we provide the following screen capture :

phase angle between inverter and mains oscillates between -90° and + 90° centered around 0°
Logic of the continous phase angle approximation signal conditioning block

Now that we have our proper phase angle approximation signal, it is time to feed it to the control loop.

Remember from our previous post that, assuming the same frequency and voltage for both signals, and a constant phase shift or a phase variation frequency that is negligible compared to f_ac :

$$ (1)\hspace{1cm} \left | \Delta \varphi \right | = 2arcsin(peak( \frac{\left |V_{mains}(t) - V_{inverter}(t)\right |}{2V_{max}} )) $$

with peak() defined as the function that returns the peak value as a step function over the time range of interest defined below.

Note that for : $$ (2)\hspace{1cm} \left | \Delta \varphi \right | \ll \pi $$

$$ (3)\hspace{1cm} \left | \Delta \varphi \right | \approx peak( \frac{\left |V_{mains}(t) - V_{inverter}(t)\right |}{V_{max}} ) $$

Being sinusoidal in nature, it follows that for a time interval $$ (4)\hspace{1cm} \left [ t_{1} , t_{1} + \frac{1}{2f_{ac}} \right ] $$ or multiple thereof,

(3) is a linear relationship because $$ (5)\hspace{1cm} peak(k\times a(t)) = k\times peak(a(t)) $$ provided that (2) is true. Note that for the ZCD discrete phase angle method of our previous post, there is a linear relationship over the whole [ -pi , pi ] domain.
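Equations (1) and (3) can be verified numerically; the sketch below recovers the phase angle from the sampled difference signal (unit amplitudes assumed, as in the derivation above).

```python
import math

def phase_from_difference(delta_phi, vmax=1.0, n=10000):
    """Numerically recover |delta_phi| from peak(|Vm - Vi|) per eq. (1),
    and compute the small-angle approximation per eq. (3)."""
    peak = max(
        abs(vmax * math.sin(t) - vmax * math.sin(t - delta_phi))
        for t in (2.0 * math.pi * i / n for i in range(n)))
    exact = 2.0 * math.asin(peak / (2.0 * vmax))   # eq. (1)
    approx = peak / vmax                           # eq. (3)
    return exact, approx
```

For small angles the two results coincide; at larger angles the approximation (3) undershoots the exact value (1), since 2·arcsin(x/2) > x, which is why condition (2) is needed.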

The main difference then lies in the LP stage filtering response of our control loop between a variable duty cycle bipolar square wave signal with 2*f_ac frequency and a bipolar sinusoidal signal with rectified sine harmonics at 2*f_ac frequency.

Phase angle control loop

We reused at first the phase control loop from our earlier post design :

https://www.skynext.tech/index.php/2023/01/16/single-phase-inverter-synchronization-to-mains-phase-using-the-zero-crossing-method-and-proportional-derivative-control-with-analog-components/

Since this post, it has been updated with an additional integral term to get a PID control loop.

This loop already gave relatively good results (phase angle < 0.75° for most disturbances in our simulation bench). We used it to gather data in the new phase continuous model as a reference for improvement.

Then, we optimized the loop design to get a better phase response. For this, we got rid of the Butterworth filters after the integral and derivative stages, and tuned the integral cutoff frequency and the derivative peak response frequency. We will post both results here as well as the Bode plot of the new control loop.

Voltage imbalance sensitivity

Voltage amplitude imbalance between mains_scaled and inverter_scaled has effects on the diff_out signal that are mostly characterized by a reduced sensitivity to small phase angles. The signal shows a larger DC bias, which swamps the response to angle variations.

The leading/lagging transition response seems less affected, the system being able to detect the transition in small phase angle oscillations, even in the presence of a moderate voltage imbalance.

Let’s discuss the possible mitigation strategies of the voltage imbalance sensitivity.

For the purpose of phase synchronization outlined above, the inputs of the control system are :

  • Inverter voltage sensing coming from an isolation transformer on the output of the inverter.
  • Mains/grid voltage sensing coming from another isolation transformer

Both of these inputs could also be used for voltage (amplitude) sensing. Inverter voltage sensing is already used for inverter voltage (feedback) regulation. If we wish to compensate the voltage imbalance for phase synchronization, we may need to sense both.

Voltage amplitude sensing methods usually implement peak detection using smoothing capacitors and a full bridge rectifier.

Inverter voltage is dictated by the inner voltage/current control loop and possibly an outer control loop. It is subject to a certain amount of inertia. Moreover, the set/regulated voltage may well be different from the nominal 240V AC.

Mains/grid voltage is dictated by the grid. We also have to take into account the series impedance of the transmission line and that of the 10kV/240V utility transformer. These will produce a voltage drop dependent on the load, and account for a large portion of the voltage variation during the day.

If, for whatever reason, we wish to implement the proposed method above, we would need to get rid of the voltage amplitude difference.

  • Either we establish the mains voltage as a reference and make the inverter follow it, by controlling the amplitude of the independently generated sine wave reference of the SPWM modulator. In that case, it defeats one of the main purposes of an inverter, especially for online (double-conversion) UPS, which is voltage stability independent of the grid.
  • We could also use the mains voltage as a reference to the full extent, after scaling it down, by using the mains voltage waveform as the sine wave reference used in the SPWM generation. In that case, the inverter also follows the frequency and phase of the grid as a bonus, which renders the whole synchronization issue of the present article moot. The downside is that the inverter output is now subject to all disturbances of the grid, including transients, noise, etc., if adequate filtering is not provided. The inverter now works as a class-D amplifier.
  • Third option: we establish the inverter voltage as a reference and make the mains (scaled down voltage input) follow the inverter voltage in terms of amplitude. Since we have no control over the voltage from the grid, the only method that seems plausible would be to perform AGC (automatic gain control) on the sensed mains voltage to make it follow the sensed inverter voltage.

The latter is not without problems, though. We predict that there may be quite a high amount of crossover interaction between the phase/frequency control loop and the voltage/current control loop, making the tuning of both difficult. Let’s try nevertheless.

Implementing AGC on mains voltage sensing

An inverter voltage control loop usually implements voltage sensing for its output using a peak detector (with attack/release control). Doing the same for the mains voltage is also usually a requirement, for instance to detect voltage sags/swells that go beyond the AVR capability, or simply for mains blackout detection. It therefore seems that it would not cost much to at least try to implement an AGC for the goal of phase angle synchronization, using the peak detector outputs as differential inputs to generate a voltage control signal, based on the voltage error, that will subsequently be applied to a VCA. The VCA will perform AGC on the scaled mains voltage signal to keep it at the same amplitude as that of the scaled inverter voltage. Then phase angle measurement can be performed without worrying about the effect of amplitude imbalance.

The VCA would not need to meet fancy requirements. It does not need high bandwidth, since it will work on a 50 Hz signal. It does not need high dynamic range, since it will operate on a mains voltage within plus/minus 20% (worst-case scenario) of the nominal 240V AC. (Mains voltage is required in Europe to stay within plus/minus 10% of the nominal 240V AC.)

However, it should preferably use linear voltage control of the gain, to ease loop design and tuning.

Voltage transient filtering (or what remains of it after the upstream TVS) could be achieved by tuning the attack potentiometer of the peak-detector stage. However, a compromise must be found between good transient rejection and a good voltage-following response, so as not to introduce too much delay. This is not an easy task.

Given the requirements, a TI VCA824 IC seems a good choice. Other options, although not tested, would be an OTA like the LM13700, or an audio VCA like the THAT 2180x series. The THAT 2180x is OTA-like in that it sources/sinks current at its output, so either a resistor or, better, a current-to-voltage op-amp stage is needed at the output. It is also an exponential (dB/V) voltage-controlled device, whereas the VCA824 is linear (V/V). An advantage of the THAT 2180x is that it features 0 dB gain at the 0V control point. That is not the case for the VCA824, where unity gain sits closest to 0V gain control for a 2V/V maximum-gain setup (dictated by the Rf/Rg feedback resistors). Even with a 2V/V gain setup, the unity-gain point is not exactly at 0V (at least in our setup), but this is not much of a problem, since the amplitude control loop takes care of it.

Another issue encountered with the VCA824 is that we had to correct input and output offset voltages using voltage dividers at the signal input and output, as shown in the datasheet. Using AC coupling for that purpose is a big no-no, since it would introduce delay.

Then there is the cost issue: the VCA824 is expensive, and its features are underutilized here, since it is tailored for HF/VHF use. But it works well at VLF, like 50 Hz, too.

Finally, there is the issue of dynamic range. The VCA824 can’t take much more than ±1.6V at its input, and its output swings noticeably less than ±Vs. Here Vs is ±5V (rail to rail), which is the maximum for safe operation. To keep some operational margin for voltage sags and swells, we set the maximum gain at 3V/V, and the whole setup aims for a normalized 1V-amplitude mains signal, whatever the sag/swell condition is. We expect the setup to be more sensitive to noise because of the reduced signal amplitude that is fed to the continuous phase-angle measurement logic.
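The headroom argument can be checked with a few lines of arithmetic. The scaling here is a hypothetical illustration (we assume the front-end divider brings nominal mains to a 1 V peak signal); the 3 V/V maximum gain and the ±1.6V input limit are the figures quoted above:

```python
# Hypothetical scaling sketch: the front-end divider is assumed to bring
# nominal 230 V RMS mains to a 1 V peak signal; figures are illustrative.
nominal_peak = 1.0  # scaled mains amplitude at nominal voltage, in volts

for sag in (0.0, 0.2, 0.4):                  # 0 %, 20 %, 40 % voltage sags
    vin = nominal_peak * (1 - sag)           # amplitude reaching the VCA input
    gain_needed = nominal_peak / vin         # gain restoring a 1 V amplitude
    ok = gain_needed <= 3.0 and vin <= 1.6   # 3 V/V max gain, +/-1.6 V input limit
    print(f"sag {sag:.0%}: input {vin:.2f} V, "
          f"gain {gain_needed:.2f} V/V, within limits: {ok}")
```

With a 3 V/V ceiling, the loop can renormalize sags down to one third of the nominal scaled amplitude, well beyond the ±20% worst case considered earlier.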

Amplitude control loop to get a normalized mains signal

Amplitude disturbance

So far, we have only considered single-tone FM disturbance of the mains grid voltage. We still have to tackle amplitude disturbances, such as fast voltage transients with a clamped profile (from the TVS action), temporary overvoltages/undervoltages (from load-rejection/load-connection events in a generator setup), and slow daily/hourly voltage variations due to load-profile changes across the utility subscribers sharing an 11kV/230V distribution transformer.

First, we will test performance with a static voltage deviation from the nominal 230V and see how the AGC performs and how the whole loop behaves.

<to be continued>

Harmonics Disturbance

This is the hard part. With a good voltage-following characteristic, the whole loop will also track grid harmonics to some extent, whereas our goal is to track the phase and frequency of the fundamental, not of the harmonics-laden signal. Filtering the mains signal might seem like a good idea, but we would need a really flat phase response (like that of a Bessel filter), and even then, we would need to compensate the delay, with something like an all-pass filter tuned to bring the 50Hz signal back to a 360° phase shift or a multiple thereof. That would introduce phase problems at frequencies other than 50Hz. Moreover, the Bessel response is inadequate to filter the third harmonic sufficiently, since it is so close to the fundamental. We could use a Butterworth low-pass filter, but the phase-response issues would be even worse with each increasing order. We could imagine really good harmonic rejection with a resonant filter, but that would be the absolute worst of the worst in terms of phase issues. In our opinion, harmonics rejection is, at the current state of analog filter technology, an intractable issue, and would be better tackled in the Z-domain. Comment if you disagree.
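The problem can be made concrete by evaluating a filter's complex response at the fundamental and the third harmonic. The sketch below uses a 2nd-order Butterworth low-pass with an illustrative 100 Hz cutoff (not a value from the article) and pure stdlib math:

```python
import cmath
import math

def butter2(f, fc):
    """2nd-order Butterworth low-pass response H(j*2*pi*f), cutoff fc."""
    s = 1j * f / fc  # normalized j*omega
    return 1 / (s * s + math.sqrt(2) * s + 1)

fc = 100.0  # illustrative cutoff, between the 50 Hz fundamental and 3rd harmonic
for f in (50.0, 150.0):
    h = butter2(f, fc)
    print(f"{f:5.0f} Hz: |H| = {abs(h):.2f}, "
          f"phase = {math.degrees(cmath.phase(h)):6.1f} deg")
```

The numbers illustrate both complaints at once: the fundamental already suffers a large phase lag (about -43° here), while the third harmonic is only attenuated to about 0.41 of its amplitude (roughly -8 dB), far from clean rejection.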

Nevertheless, we added a harmonics disturbance setup to our model, with 3rd, 5th, 7th, 9th and 11th harmonics, each with adjustable amplitude (as a percentage of the fundamental amplitude) and phase, to characterize performance. At this point, equation (3) is unsuitable, unless we compare the output sine reference to the fundamental of the harmonics-disturbed signal.
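For reference, such a harmonics-disturbed test waveform is easy to generate. The harmonic levels below are purely illustrative placeholders; the actual test-bench values live in the LTspice model, not here:

```python
import math

# Illustrative harmonic levels (fraction of fundamental, phase in radians);
# the real test-bench values are set in the LTspice model.
HARMONICS = {3: (0.05, 0.0), 5: (0.04, 0.0), 7: (0.03, 0.0),
             9: (0.02, 0.0), 11: (0.015, 0.0)}

def disturbed_mains(t, f0=50.0, harmonics=HARMONICS):
    """Unit-amplitude 50 Hz fundamental plus odd harmonics."""
    v = math.sin(2 * math.pi * f0 * t)
    for n, (amp, phase) in harmonics.items():
        v += amp * math.sin(2 * math.pi * n * f0 * t + phase)
    return v

# Sample one 50 Hz cycle and report the peak of the disturbed waveform:
samples = [disturbed_mains(k / 50000.0) for k in range(1000)]
print(round(max(abs(s) for s in samples), 3))
```

Feeding a PLL model with such a waveform, and comparing the output sine reference against the clean fundamental only, is one way to evaluate harmonic rejection as suggested above.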

Simulation Model

The simulation model includes the LTspice ASC file with all package dependencies (asy, sub, lib) in the same folder. There should be no need to tweak absolute paths inside the file, as they have been removed. No non-standard diodes, FETs or BJTs are used, so there should be no need to add lines to the respective model files (such as standard.dio or standard.bjt).

This model only covers the PLL, not the full inverter. Its goal is to generate a synchronized sine reference from the mains voltage while tolerating voltage sags/swells, frequency variation and harmonics. Harmonics should be rejected from the sine reference as much as possible.

Recently, I added a block that simulates ADC operation with sample-time quantization and amplitude quantization, to more accurately model an AD7366 ADC.
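The two quantization effects that block models can be sketched in a few lines. This is a behavioral stand-in only, assuming a 12-bit bipolar converter on a ±5V range (one of the AD7366's ranges); it is not a model of the actual LTspice block:

```python
import math

def adc_sample(signal, fs, n_samples, vref=5.0, bits=12):
    """Sketch of time and amplitude quantization: sample signal(t) at rate
    fs and quantize to a bipolar +/-vref, `bits`-bit code, as a behavioral
    stand-in for the AD7366 (+/-5 V input range assumed here)."""
    lsb = 2 * vref / (1 << bits)  # one code step in volts
    half = 1 << (bits - 1)
    out = []
    for k in range(n_samples):
        v = signal(k / fs)                            # time quantization
        code = max(-half, min(half - 1, round(v / lsb)))  # amplitude quantization
        out.append(code * lsb)                        # reconstructed voltage
    return out

# One 50 Hz cycle sampled at 10 kS/s:
samples = adc_sample(lambda t: math.sin(2 * math.pi * 50 * t),
                     fs=10000, n_samples=200)
print(round(max(samples), 4))
```

The reconstructed peak lands within one LSB (about 2.4 mV on this range) of the true 1 V amplitude, which is the granularity the phase-measurement logic downstream has to live with.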

It includes a test bench block to simulate :

  • Amplitude disturbance
  • Frequency disturbance
  • Harmonics disturbance
  • Initial phase angle

It also includes the PLT files for plotting.

More information is available in ____README____.txt inside the zip archive.

Have a nice day !