Richards’s curve induced Banach space valued multivariate neural network approximation

SEDA KARATEKE

Article | 2022 | Arabian Journal of Mathematics

Here, we present multivariate quantitative approximations of Banach space valued continuous multivariate functions on a box or R^N, N ∈ N, by the multivariate normalized, quasi-interpolation, Kantorovich-type and quadrature-type neural network operators. We also examine the case of approximation by iterated operators of the last four types. These approximations are achieved by establishing multidimensional Jackson-type inequalities involving the multivariate modulus of continuity of the engaged function or its high-order Fréchet derivatives. Our multivariate operators are defined using a multidimensional density function induced by the Richards curve, which is a generalized logistic function. The approximations are pointwise, uniform and L^p. The related feed-forward neural network has one hidden layer.
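As an illustration of the curve named in the abstract above, here is a minimal Python sketch of the Richards (generalized logistic) function, assuming its common five-parameter form; the difference-of-shifted-sigmoids kernel below is a typical way such curves induce a density function, not necessarily the paper's exact construction:

```python
import math

def richards(x, a=0.0, k=1.0, b=1.0, q=1.0, nu=1.0):
    """Richards (generalized logistic) curve, five-parameter form:
    f(x) = a + (k - a) / (1 + q * exp(-b * x)) ** (1 / nu).
    With the defaults it reduces to the standard logistic sigmoid
    1 / (1 + exp(-x))."""
    return a + (k - a) / (1.0 + q * math.exp(-b * x)) ** (1.0 / nu)

def density(x):
    """Illustrative (assumed) density kernel: a scaled difference of
    shifted sigmoids, which is symmetric and bell-shaped."""
    return 0.5 * (richards(x + 1.0) - richards(x - 1.0))

print(richards(0.0))  # 0.5, the logistic sigmoid at the origin
```

With the default parameters the kernel is even, density(x) == density(-x), which is the property such operator constructions typically rely on.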

2D Vector Representation of Binomial Hierarchical Tree Items

METİN ZONTUL | SEDA KARATEKE

Article | 2022 | Conference Proceedings

Today, Artificial Intelligence (AI) algorithms need to represent different kinds of input items in numeric or vector format. Some input data can easily be transformed to numeric or vector format, but the structure of some special data prevents direct and easy transformation. For instance, we can represent an air condition using humidity, pressure, and temperature values with a vector that has three features, and we can assess the similarity of two different air measurements using the cosine similarity of the two vectors. But consider a general ontology tree with "entity" as the root element, its two children "living things" and "non-living things" as first-level elements, and in turn the children of "living things", namely "Animals" and "Plants", as second-level elements; it is harder to represent this kind of data with numeric values. The ontology tree starts from general items and proceeds to specific items. If we want to represent an element of this tree with a vector, how can it be done? And if we measure similarity using a method like cosine similarity, which similarity is higher: ("Animal" and "non-living thing") or ("Animal" and "living thing")? How should we select the values of this vector for each item of the hierarchical tree? In this paper, we propose an original and basic idea for representing hierarchical tree items with 2D vectors; in the proposed method, the cosine-similarity metric measures the semantic similarity of represented items at the same level as parent items. There are two important results related to our representation: (1) The "y" values of the items give the hierarchical level of the item. (2) For items at the same level, the cosine similarity between a parent item and a child item is higher when the child belongs to that parent than when it belongs to another parent. In other words, the cosine similarity between a parent item and a child item is highest when the child belongs to that parent.
© 2022 IEEE
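The two results stated in the abstract above can be checked with a small Python sketch; the coordinate placements here are hypothetical (the paper's exact assignment scheme is not reproduced), chosen only so that y encodes the hierarchy level and siblings spread around their parent's x position:

```python
import math

def cosine(u, v):
    """Cosine similarity of two 2D vectors."""
    return (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))

# Hypothetical 2D placements (illustrative, not the paper's scheme):
# the y value encodes the hierarchical level of the item.
living     = (-1.0, 1.0)   # level-1 child of the root "entity"
non_living = ( 1.0, 1.0)   # level-1 child of the root "entity"
animals    = (-1.2, 2.0)   # level-2 child of "living things"
plants     = (-0.8, 2.0)   # level-2 child of "living things"

# "Animals" should be closer, in cosine similarity, to its own parent
# "living things" than to "non-living things":
print(cosine(living, animals) > cosine(non_living, animals))  # True
```

The assertion mirrors result (2): among same-level items, a child's cosine similarity is highest with its own parent.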

Parametrized hyperbolic tangent based Banach space valued multivariate multi layer neural network approximations

SEDA KARATEKE

Article | 2023 | pp. 490-519

Here we examine the multivariate quantitative approximations of Banach space valued continuous multivariate functions on a box or R^N, N ∈ N, by the multivariate normalized, quasi-interpolation, Kantorovich-type and quadrature-type neural network operators. We also study the case of approximation by iterated operators of the last four types, that is, multi-hidden-layer approximations. These approximations are achieved by establishing multidimensional Jackson-type inequalities involving the multivariate modulus of continuity of the engaged function or its high-order Fréchet derivatives. Our multivariate operators are defined using a multidimensional density function induced by a parametrized hyperbolic tangent sigmoid function. The approximations are pointwise, uniform and L^p. The related feed-forward neural networks have one or more hidden layers.
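For the activation named in the abstract above, a minimal Python sketch of a parametrized hyperbolic tangent sigmoid follows, assuming the common form tanh(λx) written out from exponentials; the shifted-difference kernel psi is an assumed illustration of how such a sigmoid can induce a density function, not the paper's exact definition:

```python
import math

def g(x, lam=1.0):
    """Parametrized hyperbolic tangent sigmoid, equal to tanh(lam * x),
    written from exponentials; lam > 0 is the shape parameter."""
    e_p, e_m = math.exp(lam * x), math.exp(-lam * x)
    return (e_p - e_m) / (e_p + e_m)

def psi(x, lam=1.0):
    """Assumed density kernel: a scaled difference of shifted sigmoids.
    It is even in x because g is odd."""
    return 0.25 * (g(x + 1.0, lam) - g(x - 1.0, lam))

print(g(0.0))  # 0.0, since tanh vanishes at the origin
```

Because g is odd, psi(x) == psi(-x), the symmetry that normalized operator constructions of this kind typically require.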

Richards’s curve induced Banach space valued ordinary and fractional neural network approximation

SEDA KARATEKE

Article | 2022 | Springer Link

Here we perform the univariate quantitative approximation, ordinary and fractional, of Banach space valued continuous functions on a compact interval or the whole real line by quasi-interpolation Banach space valued neural network operators. These approximations are derived by establishing Jackson-type inequalities involving the modulus of continuity of the engaged function or its Banach space valued high-order derivative or fractional derivatives. Our operators are defined using a density function generated by the Richards curve, which is a generalized logistic function. The approximations are pointwise and in the uniform norm. The related Banach space valued feed-forward neural networks have one hidden layer.
