Chunking is one of the important problems in Natural Language Processing. Most chunking models developed so far are unable to extract the relationships among the chunks. In this paper, we present a typed dependency-based chunking model (TDC) built on Stanford typed dependencies. Beyond attaining a high F-score, the distinctive feature of TDC is that it extracts semantic relationships among the chunks; TDC can therefore readily be used in tasks that require the semantics of the text. TDC is evaluated on the training and test sets provided for the CoNLL-2000 chunking shared task (Tjong Kim Sang and Buchholz, in: Proceedings of the 2nd Workshop on Learning Language in Logic and the 4th Conference on Computational Natural Language Learning, Association for Computational Linguistics, 2000). The results show that TDC achieves a higher F-score than the top-scoring model of the CoNLL-2000 shared task and than the models developed after the shared task.
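To make the idea concrete, the following sketch (an illustration, not the paper's actual algorithm) shows how chunks and the typed dependencies linking them can be read off a dependency parse. The parse, the modifier-label set, and the helper `chunks_with_relations` are all assumptions introduced here for illustration.

```python
from collections import defaultdict

# Hand-built dependency parse of "The cat sat on the mat" (assumed labels).
# Each token: (index, word, head_index, dependency_label); head 0 = root.
parse = [
    (1, "The", 2, "det"),
    (2, "cat", 3, "nsubj"),
    (3, "sat", 0, "root"),
    (4, "on", 6, "case"),
    (5, "the", 6, "det"),
    (6, "mat", 3, "obl"),
]

def chunks_with_relations(parse, modifiers=("det", "amod", "case")):
    """Group each head word with its modifier dependents into a chunk,
    then keep the typed dependency that links one chunk's head to another's.
    Returns (chunks keyed by head index, list of (chunk, relation, chunk))."""
    members = defaultdict(list)
    for idx, word, head, dep in parse:
        # Modifiers join their head's chunk; other tokens head their own chunk.
        owner = head if dep in modifiers else idx
        members[owner].append((idx, word))
    chunks = {h: " ".join(w for _, w in sorted(ms)) for h, ms in members.items()}
    relations = [
        (chunks[idx], dep, chunks[head])
        for idx, word, head, dep in parse
        if idx in chunks and head in chunks
    ]
    return chunks, relations

chunks, relations = chunks_with_relations(parse)
print(list(chunks.values()))  # chunks: "The cat", "sat", "on the mat"
print(relations)              # e.g. ("The cat", "nsubj", "sat")
```

Unlike plain phrase chunking, the output keeps the typed relation (`nsubj`, `obl`, ...) between each chunk and its governor, which is the property the abstract highlights.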