# Sparse Coding

### From Ufldl


As described above, a significant limitation of sparse coding is that even after a set of basis vectors has been learnt, encoding a new data example still requires solving an optimization problem to obtain the coefficients. This significant "runtime" cost makes sparse coding computationally expensive even at test time, especially compared to typical feedforward architectures, which compute a code with a single forward pass.
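To make the test-time cost concrete, the sketch below infers the coefficients for one new example by minimizing ½‖x − Aa‖² + λ‖a‖₁ with ISTA (iterative shrinkage-thresholding). This is one common solver for the sparse coding inference step, not necessarily the one used in the UFLDL material; the names `encode`, `lam`, and `n_iters`, and the choice of ISTA itself, are illustrative assumptions.

```python
import numpy as np

def encode(x, A, lam=0.1, n_iters=500):
    """Infer sparse coefficients a for one example x given a learnt basis A,
    by minimizing 0.5 * ||x - A a||^2 + lam * ||a||_1 with ISTA.
    Illustrative sketch: solver choice and parameter names are assumptions."""
    n_basis = A.shape[1]
    a = np.zeros(n_basis)
    # Step size 1/L, where L = ||A||_2^2 is the Lipschitz constant
    # of the gradient of the quadratic term.
    L = np.linalg.norm(A, ord=2) ** 2
    for _ in range(n_iters):
        grad = A.T @ (A @ a - x)      # gradient of 0.5 * ||x - A a||^2
        z = a - grad / L              # gradient descent step
        # Soft-thresholding: the proximal operator of lam * ||a||_1.
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a
```

Note that the loop must run until convergence for every example to be encoded, which is exactly the per-example cost the paragraph above describes; a feedforward network would replace the whole loop with one matrix multiply and a nonlinearity.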
