ORBi

High Frequency Acoustic Scattering in Isogeometric Analysis
Bordas, Stéphane et al. Scientific Conference (2017, May 15)
There is an emerging need to perform high-frequency scattering analysis on high-fidelity models. Conventional finite element analysis suffers from an irretrievable loss of boundary accuracy as well as from pollution error. Man-made geometries can be represented exactly in Isogeometric Analysis (IGA), with no geometrical loss even on very coarse meshes. The aim of this paper is to analyse the accuracy of IGA for exterior acoustic scattering problems. The numerical results show extremely low pollution error even at very high frequencies.

Accelerating Monte Carlo estimation with derivatives of high-level finite element models
Hauseux, Paul; Hale, Jack; Bordas, Stéphane, in Computer Methods in Applied Mechanics and Engineering (2017), 318
In this paper we demonstrate the ability of a derivative-driven Monte Carlo estimator to accelerate the propagation of uncertainty through two high-level non-linear finite element models. The use of derivative information amounts to a correction to the standard Monte Carlo estimation procedure that reduces the variance under certain conditions. We express the finite element models in variational form using the high-level Unified Form Language (UFL). We derive the tangent linear model automatically from this high-level description and use it to efficiently calculate the required derivative information.
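The derivative-driven correction admits a compact illustration as a control-variate estimator: subtracting the first-order Taylor term (whose exact mean is zero) from each sample leaves the estimator unbiased while shrinking its variance when the model is close to linear. A scalar toy sketch, with `f` and `df` standing in for the finite element model and its tangent linear model (all values illustrative, not the paper's implementation):

```python
import random, math, statistics

def f(theta):
    # toy non-linear "quantity of interest", standing in for the FE model
    return math.sin(theta) + 0.1 * theta ** 2

def df(theta):
    # its derivative: the role played by the tangent linear model
    return math.cos(theta) + 0.2 * theta

mu, sigma, N = 1.0, 0.1, 20000   # illustrative input distribution and sample size
random.seed(0)
thetas = [random.gauss(mu, sigma) for _ in range(N)]

plain = [f(t) for t in thetas]
# control variate: subtract the first-order term, whose exact mean is zero,
# so the corrected estimator stays unbiased but has a much lower variance
corrected = [f(t) - df(mu) * (t - mu) for t in thetas]

print(statistics.mean(plain), statistics.mean(corrected))          # agree closely
print(statistics.variance(corrected) / statistics.variance(plain)) # far below 1
```

Only one extra derivative evaluation at the mean is needed, which mirrors the "one extra tangent linear solution per estimation problem" cost reported in the entry.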
To study the effectiveness of the derivative-driven method we consider two stochastic PDEs: a one-dimensional Burgers equation with stochastic viscosity, and a three-dimensional geometrically non-linear Mooney-Rivlin hyperelastic equation with stochastic density and volumetric material parameter. Our results show that for these problems the first-order derivative-driven Monte Carlo method is around one order of magnitude faster than the standard Monte Carlo method, at the cost of only one extra tangent linear solution per estimation problem. We find similar trends when comparing with a modern non-intrusive multi-level polynomial chaos expansion method. We parallelise the repeated forward model evaluations across a cluster using the ipyparallel and mpi4py software tools. A complete working example showing the solution of the stochastic viscous Burgers equation is included as supplementary material.

Three-dimensional remeshed smoothed particle hydrodynamics for the simulation of isotropic turbulence
Obeidat, Anas; Bordas, Stéphane, in International Journal for Numerical Methods in Fluids (2017)
We present a remeshed particle-mesh method for the simulation of three-dimensional compressible turbulent flow. The method is related to the mesh-free smoothed particle hydrodynamics (SPH) method, but introduces a mesh for efficient calculation of the pressure gradient and of laminar and turbulent diffusion. In addition, the mesh is used to remesh (reorganise uniformly) the particles, ensuring a regular particle distribution and convergence of the method. The accuracy of the presented methodology is tested on a number of benchmark problems involving two- and three-dimensional Taylor-Green flow, a thin double shear layer, and three-dimensional isotropic turbulence. Two models were implemented: direct numerical simulation and the Smagorinsky model. Taking advantage of Lagrangian advection and finite-difference efficiency, the method is capable of providing quality simulations while maintaining its robustness and versatility.

Modelling hydraulic fractures in porous media using flow cohesive interface elements
In Engineering Geology (2017), 225

Strain smoothing for compressible and nearly-incompressible finite elasticity
Hale, Jack et al., in Computers and Structures (2017), 182
We present a robust and efficient form of the smoothed finite element method (S-FEM) to simulate hyperelastic bodies with compressible and nearly-incompressible neo-Hookean behaviour. The resulting method is stable, free from volumetric locking and robust on highly distorted meshes. To ensure inf-sup stability we add a cubic bubble function to each element. The weak form for the smoothed hyperelastic problem is derived analogously to that of the smoothed linear elastic problem. Smoothed strains and smoothed deformation gradients are evaluated on sub-domains selected by either edge information (edge-based S-FEM, ES-FEM) or nodal information (node-based S-FEM, NS-FEM). Numerical examples demonstrate the efficiency and reliability of the proposed approach in the nearly-incompressible limit and on highly distorted meshes.
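The boundary-integral smoothing used by the S-FEM family can be demonstrated on a single smoothing cell: by the divergence theorem, the smoothed gradient of a field over a cell equals the boundary integral of the field times the outward normal, divided by the cell area, with no interior derivatives and no isoparametric mapping. A minimal sketch on one triangular cell with an illustrative linear field (not the paper's hyperelastic setting):

```python
import numpy as np

# one triangular smoothing cell, vertices counter-clockwise (illustrative)
verts = np.array([[0.0, 0.0], [1.0, 0.1], [0.3, 0.9]])

def u(p):
    # a linear field whose exact gradient is (2.0, -1.5)
    return 4.0 + 2.0 * p[0] - 1.5 * p[1]

v1, v2 = verts[1] - verts[0], verts[2] - verts[0]
A = 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])   # cell area

# smoothed gradient: (1/A) * integral of u * (outward normal) over the boundary
g = np.zeros(2)
for i in range(3):
    p0, p1 = verts[i], verts[(i + 1) % 3]
    d = p1 - p0
    n = np.array([d[1], -d[0]])       # outward normal of a CCW edge, |n| = edge length
    g += u(0.5 * (p0 + p1)) * n       # midpoint rule, exact for a linear field
g /= A

print(g)  # matches the exact gradient (2.0, -1.5)
```

For a linear field the smoothed gradient is exact regardless of how distorted the cell is, which is the intuition behind S-FEM's insensitivity to mesh distortion.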
We conclude that strain smoothing is at least as accurate and stable as the MINI element for an equivalent problem size.

What makes Data Science different? A discussion involving Statistics2.0 and Computational Sciences
Bordas, Stéphane et al. E-print/Working paper (2017)
Data Science is today one of the main buzzwords, be it in business, industrial or academic settings. Machine learning, experimental design and data-driven modelling are all, undoubtedly, rising disciplines, if one goes by the soaring number of research papers and patents appearing each year. The prospect of becoming a "Data Scientist" appeals to many. A discussion panel organised as part of the European Data Science Conference (European Association for Data Science, EuADS) asked the question: "What makes Data Science different?" In this paper we give our own, personal and multi-faceted view on this question, from an engineering and a statistics perspective. In particular, we compare Data Science to Statistics and discuss the connection between Data Science and Computational Science.

Reduced basis Nitsche-based domain decomposition: a biomedical application
Baroli, Davide; Beex, Lars; Hale, Jack et al. Scientific Conference (2017, March 10)
Nowadays, personalized biomedical simulations demand real-time, efficient and reliable methods to alleviate the computational complexity of high-fidelity simulation. In such applications, the necessity of solving different substructures, e.g. tissues or organs, with different numbers of degrees of freedom, and of coupling the reduced-order spaces for each substructure, poses a challenge for on-the-fly simulation. In this talk, this challenge is addressed by employing the Nitsche-based domain decomposition technique inside the reduced-order model [1]. Compared with other domain decomposition approaches, this technique yields a solution with the same accuracy as the underlying finite element formulation and flexibly treats interfaces with non-matching meshes. The robustness of the coupling is determined by penalty coefficients, which are chosen using the ghost penalty technique [2]. Furthermore, to reduce the computational complexity of on-the-fly assembly, the empirical interpolation approach proposed in [3] is employed. Numerical tests, performed using FEniCS [4], petsc4py and slepc4py [5], show the good performance of the method and the reduction of computational cost.
[1] Baroli, D., Beex, L. and Bordas, S. Reduced basis Nitsche-based domain decomposition. In preparation.
[2] Burman, E., Claus, S., Hansbo, P., Larson, M. G. and Massing, A. (2015). CutFEM: Discretizing geometry and partial differential equations. International Journal for Numerical Methods in Engineering, 104(7), 472-501.
[3] Schenone, E., Beex, L., Hale, J.S. and Bordas, S. Proper Orthogonal Decomposition with reduced integration method. Application to nonlinear problems. In preparation.
[4] Logg, A., Mardal, K.-A., Wells, G. N. et al. Automated Solution of Differential Equations by the Finite Element Method. Springer, 2012.
[5] Dalcin, L., Kler, P., Paz, R. and Cosimo, A. Parallel Distributed Computing using Python. Advances in Water Resources, 34(9):1124-1139, 2011. http://dx.doi.org/10.1016/j.advwatres.2011.04.013

Uncertainty Quantification - Sensitivity Analysis / Biomechanics
Hauseux, Paul; Hale, Jack; Bordas, Stéphane. Presentation (2017, February)

Real-time Error Control for Surgical Simulation
Bui, Huu Phuoc; Tomar, Satyendra; Bordas, Stéphane. E-print/Working paper (2017)
Real-time simulations are becoming increasingly common for various applications, from geometric design to medical simulation. Two of the main factors concurrently involved in defining the accuracy of surgical simulations are the modelling error and the discretization error. Most work in the area has treated these sources of error as a compounded, lumped, overall error. Little or no work has been done to discriminate between modelling error (e.g. needle-tissue interaction, choice of constitutive models) and discretization error (use of approximation methods such as FEM). However, it is impossible to validate the complete surgical simulation approach and, more importantly, to understand the sources of error, without evaluating both the discretization error and the modelling error. Our objective is thus to devise a robust and fast approach to measure the discretization error via a posteriori error estimates, which are then used for local remeshing in surgical simulations. To ensure that the approach can be used in clinical practice, the method should be robust enough to deal, as realistically as possible, with the interaction of surgical tools with the organ, and fast enough for real-time simulations. The approach should also lead to improved convergence, so that an economical mesh is obtained at each time step.
The final goal is to achieve optimal convergence and the most economical mesh, which will be studied in our future work.

A fully smoothed XFEM for analysis of axisymmetric problems with weak discontinuities
In International Journal for Numerical Methods in Engineering (2017), 110(3), 203-226
In this paper, we propose a fully smoothed extended finite element method (SmXFEM) for axisymmetric problems with weak discontinuities. The salient feature of the proposed approach is that all the terms in the stiffness and mass matrices can be computed by a smoothing technique. This is accomplished by combining Green's divergence theorem with the evaluation of an indefinite integral based on the smoothing technique, which transforms the domain integral into a boundary integral. The proposed technique completely eliminates the need for isoparametric mapping and for computing the Jacobian matrix, even for the mass matrix. When employed over the enriched elements, the proposed technique does not require sub-triangulation for the purpose of numerical integration. The accuracy and convergence properties of the proposed technique are demonstrated on a few problems in elastostatics and elastodynamics with weak discontinuities. The proposed technique yields stable and accurate solutions and is less sensitive to mesh distortion.

Numerical evaluation of buckling behaviour induced by compression on patch-repaired composites
Bordas, Stéphane et al., in Composite Structures (2017), 168
A progressive damage model is proposed to predict buckling strengths and failure mechanisms for both symmetric and asymmetric patch-repaired carbon-fibre reinforced laminates subjected to compression without lateral restraints. Solid and cohesive elements are employed to discretize the composite and adhesive layers, respectively. Coupled with three-dimensional strain failure criteria, an energy-based crack band model is applied to address the softening behaviour in composites while eliminating mesh dependency. Both laminar- and laminate-scale failure are addressed. Patch debonding is simulated by the cohesive zone model, with a trapezoidal traction-separation law applied for the ductile adhesive. Geometric imperfection is introduced into the nonlinear analysis via the first-order linear buckling configuration. Regarding strengths and failure patterns, the simulation demonstrates accurate and consistent predictions compared with experimental observations. Though shearing is the main contributor to damage initiation in the adhesive, stress analysis shows that lateral deformation subsequently reverses the distribution of normal stresses, which stimulates patch debonding at one of the repair sides. The influence of patch dimensions on strengths and failure mechanisms can be explained by stress distributions in the adhesive and lateral deformation of the repairs. Comparison between symmetric and asymmetric repairs regarding strength and failure modes shows that structural asymmetry can intensify lateral flexibility, resulting in earlier patch debonding and negative effects on strengths.

Guaranteed error bounds in homogenisation: an optimum stochastic approach to preserve the numerical separation of scales
Bordas, Stéphane et al., in International Journal for Numerical Methods in Engineering (2017), 110(2), 103-132
This paper proposes a new methodology to guarantee the accuracy of the homogenisation schemes that are traditionally employed to approximate the solution of PDEs with random, fast-evolving diffusion coefficients. We typically consider linear elliptic diffusion problems in randomly packed particulate composites. Our work extends the pioneering work presented in [26,32] in order to bound the error in the expectation and second moment of quantities of interest, without ever solving the fine-scale, intractable stochastic problem. The most attractive feature of our approach is that the error bounds are computed without any integration of the fine-scale features: our computations are purely macroscopic, deterministic, and remain tractable even for small scale ratios. The second contribution of the paper is an alternative derivation of modelling error bounds through the Prager-Synge hypercircle theorem. We show that this approach allows us to fully characterise and optimally tighten the interval in which predicted quantities of interest are guaranteed to lie. We interpret our optimum result as an extension of Reuss-Voigt approaches, which are classically used to estimate the homogenised diffusion coefficients of composites, to the estimation of macroscopic engineering quantities of interest. Finally, we use these derivations to obtain an efficient procedure for multiscale model verification and adaptation.
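The classical Reuss-Voigt estimates extended above can be computed in closed form: for a two-phase composite, the effective diffusion coefficient lies between the harmonic (Reuss) and arithmetic (Voigt) volume averages of the phase coefficients. A sketch with illustrative phase values and volume fraction:

```python
# Reuss-Voigt bounds on the effective coefficient of a two-phase composite.
# k1, k2: phase diffusion coefficients; f: volume fraction of phase 1 (illustrative).
k1, k2, f = 10.0, 1.0, 0.3

voigt = f * k1 + (1 - f) * k2            # arithmetic mean: upper bound
reuss = 1.0 / (f / k1 + (1 - f) / k2)    # harmonic mean: lower bound

print(reuss, voigt)  # the effective coefficient is guaranteed to lie in between
```

The gap between the two bounds grows with the phase contrast k1/k2, which is why tightening such intervals, as the entry above does for general quantities of interest, matters in practice.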
Real-time Error Control for Surgical Simulation
In IEEE Transactions on Biomedical Engineering (2017)
Objective: to present the first real-time a posteriori error-driven adaptive finite element approach for real-time simulation, and to demonstrate the method on a needle insertion problem. Methods: we use corotational elasticity and a frictional needle/tissue interaction model. The problem is solved using finite elements within SOFA. The refinement strategy relies upon a hexahedron-based finite element method, combined with a posteriori error estimation driven local h-refinement, for simulating soft tissue deformation. Results: we control the local and global error level in the mechanical fields (e.g. displacement or stresses) during the simulation. We show the convergence of the algorithm on academic examples, and demonstrate its practical usability on a percutaneous procedure involving needle insertion in a liver. For the latter case, we compare the force-displacement curves obtained from the proposed adaptive algorithm with those obtained from a uniform refinement approach. Conclusions: error control guarantees that a tolerable error level is not exceeded during the simulations, and local mesh refinement accelerates them. Significance: our work provides a first step towards discriminating between discretization error and modelling error, by providing a robust quantification of discretization error during simulations.
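A posteriori error estimation of the kind driving this adaptivity can be sketched in one dimension with a simple recovery-based (Zienkiewicz-Zhu-style) indicator: compare the element-wise gradient of the discrete solution against a smoothed, nodally averaged gradient, and mark the worst elements for h-refinement. The mesh, field and marking threshold below are illustrative stand-ins, not the SOFA implementation described in the entry:

```python
import numpy as np

# nodes of a non-uniform 1D mesh and a piecewise-linear FE-like solution
x = np.array([0.0, 0.2, 0.35, 0.55, 0.8, 1.0])
u = np.sin(np.pi * x)                  # stand-in for a computed displacement field

h = np.diff(x)
grad = np.diff(u) / h                  # constant gradient on each element

# recovered (smoothed) nodal gradient: average of adjacent element gradients
g_rec = np.empty_like(x)
g_rec[0], g_rec[-1] = grad[0], grad[-1]
g_rec[1:-1] = 0.5 * (grad[:-1] + grad[1:])

# element indicator: L2 norm (trapezoidal) of recovered minus raw gradient
eta = np.sqrt(h * 0.5 * ((g_rec[:-1] - grad) ** 2 + (g_rec[1:] - grad) ** 2))

refine = eta > 0.5 * eta.max()         # mark the worst elements for h-refinement
print(np.round(eta, 4), refine)
```

The indicator is cheap (one pass over the solution), which is the property that makes estimator-driven refinement compatible with real-time budgets.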
Error-controlled adaptive extended finite element method for 3D linear elastic crack propagation
In Computer Methods in Applied Mechanics and Engineering (2017), 318
We present a simple error estimation and mesh adaptation approach for 3D linear elastic crack propagation simulations using the eXtended Finite Element Method (X-FEM). A global extended recovery technique (Duflot and Bordas, 2008) is used to quantify the interpolation error. Based on this error distribution, four strategies relying on two different mesh optimality criteria are compared. The first aims at homogenizing the error distribution; the second minimizes the total number of elements given a target global error level. We study the behaviour of these criteria in the context of cracks treated by an X-FE approach. In particular, we investigate the convergence rates at the element level depending on the enrichment type. We conclude on the most suitable refinement criterion, and propose and verify a strategy for mesh adaptation on 3D damage tolerance assessment problems.

Stable 3D XFEM/vector-level sets for non-planar 3D crack propagation and comparison of enrichment schemes
Agathos, Konstantinos et al., in International Journal for Numerical Methods in Engineering (2017)
We present a three-dimensional (3D) vector level set method coupled to a recently developed stable extended finite element method (XFEM).
We further investigate a new enrichment approach for XFEM, adopting discontinuous linear enrichment functions in place of the asymptotic near-tip functions. Through the vector level set method, level set values for propagating cracks are obtained via simple geometrical operations, eliminating the need to solve differential evolution equations. The first XFEM variant ensures optimal convergence rates by means of geometrical enrichment, i.e., the use of enriched elements in a fixed volume around the crack front, without giving rise to conditioning problems. The linear enrichment approach significantly simplifies implementation and reduces the computational cost associated with numerical integration. The two discretization schemes are tested on different benchmark problems, and their combination with the vector level set method is verified on non-planar crack propagation problems.

An implicit potential method along with a meshless technique for incompressible fluid flows for regular and irregular geometries in 2D and 3D
Bourantas, Georgios et al., in Engineering Analysis with Boundary Elements (2017), 77
We present the Implicit Potential (IPOT) numerical scheme developed in the framework of meshless point collocation. The proposed scheme is used for the numerical solution of the steady-state, incompressible Navier-Stokes (N-S) equations in their primitive variable (u-v-w-p) formulation. The governing equations are solved in their strong form using either a collocated or a semi-staggered meshless nodal configuration. The unknown field functions and derivatives are calculated using the Modified Moving Least Squares (MMLS) interpolation method.
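The moving least squares idea underlying MMLS can be sketched in its basic, unmodified 1D form: at each evaluation point, a locally weighted polynomial fit is solved and its constant term taken as the interpolated value. The basis, weight function and data below are illustrative choices, not the paper's MMLS scheme:

```python
import numpy as np

def mls_interpolate(x_eval, x_data, f_data, radius=0.5):
    """1D moving least squares with a shifted linear basis and Gaussian weights.
    A bare-bones sketch of the MLS idea, not the paper's MMLS method."""
    vals = []
    for x in np.atleast_1d(x_eval):
        w = np.exp(-((x_data - x) / radius) ** 2)                # weight per node
        B = np.column_stack([np.ones_like(x_data), x_data - x])  # shifted basis
        # weighted normal equations: (B^T W B) a = B^T W f
        A = B.T @ (w[:, None] * B)
        b = B.T @ (w * f_data)
        a = np.linalg.solve(A, b)
        vals.append(a[0])     # basis is shifted, so a[0] is the value at x
    return np.array(vals)

x_data = np.linspace(0.0, 1.0, 11)
f_data = 2.0 * x_data + 1.0   # a linear basis reproduces linear data exactly
print(mls_interpolate([0.25, 0.6], x_data, f_data))  # ≈ [1.5, 2.2]
```

Because only scattered point values enter the fit, the same construction works on arbitrary nodal distributions in 2D and 3D, which is what makes such schemes attractive for the point-collocation setting of the entry.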
Both velocity-correction and pressure-correction methods are applied to ensure the incompressibility constraint and mass conservation. The proposed meshless point collocation (MPC) scheme has the following characteristics: (i) it applies in a straightforward manner to steady, unsteady, internal and external fluid flows in 2D and 3D; (ii) it applies equally to regular and irregular geometries; (iii) a distribution of points is sufficient, with no numerical integration in space nor any mesh structure required; (iv) there is no need for pressure boundary conditions, since no pressure constitutive equation is solved; (v) it is quite simple and accurate; (vi) results can be obtained using collocated or semi-staggered nodal distributions; (vii) there is no need to compute the velocity potential nor the unit normal vectors; and (viii) there is no need for a curvilinear system of coordinates. Simulations of fluid flow in 2D and 3D, for regular and irregular geometries, indicate the validity of the proposed methodology.

Trefftz polygonal finite element for linear elasticity: convergence, accuracy, and properties
In Asia Pacific Journal on Computational Engineering (2017)
In this paper, the accuracy and convergence properties of the Trefftz finite element method over arbitrary polygons are studied. Within this approach, the unknown displacement field within the polygon is represented by the homogeneous solution to the governing differential equations, also called the T-complete set, while on the boundary of the polygon a conforming displacement field is independently defined to enforce continuity of the field variables across the element boundary. An optimal number of T-complete functions is chosen based on the number of nodes of the polygon and the degrees of freedom per node. The stiffness matrix is computed by the hybrid formulation with an auxiliary displacement frame. Results from the numerical studies presented for a few benchmark problems in the context of linear elasticity show that the proposed method yields highly accurate results with optimal convergence rates.

A linear smoothed higher-order CS-FEM for the analysis of notched laminated composites
In Engineering Analysis with Boundary Elements (2017), 85
Higher-order elements with highly accurate solutions are attractive for stress analysis and stress concentration problems. However, the distorted eight-node serendipity quadrilateral element is known to yield inaccurate results and a sub-optimal convergence rate. In this paper, we present a higher-order CS-FEM to alleviate the effect of distorted meshes and guarantee the quality of solutions by employing a linear smoothing technique over eight-node quadratic serendipity elements. The modified strain matrix is computed by the divergence theorem between the nodal shape functions and their derivatives, using Taylor's expansion of the weak form. The proposed method eliminates the need for isoparametric mapping, and numerical studies demonstrate that it is insensitive to mesh distortion. The improved accuracy and superior convergence rates are numerically demonstrated on a few benchmark problems. The analysis of the stress concentration around cutouts also shows that the present method performs well for laminated composites.

A linear smoothed quadratic finite element for the analysis of laminated composite Reissner-Mindlin plates
In Composite Structures (2017), 180
It is well known that high-order elements have significantly improved the accuracy of solutions in traditional finite element analysis, but their performance is restricted by shear locking and distorted meshes in plate problems. In this paper, a linear smoothed eight-node Reissner-Mindlin plate element (Q8 plate element) based on first-order shear deformation theory is developed for the static and free vibration analysis of laminated composite plates; the computation of the interior derivatives of the shape functions and the isoparametric mapping can be removed. The strain matrices are modified with a linear smoothing technique, using the divergence theorem between the nodal shape functions and their derivatives in Taylor's expansion. Moreover, the first-order Taylor expansion is also employed in the construction of the stiffness matrix to satisfy the linear strain distribution. Several numerical examples indicate that the novel Q8 plate element alleviates the shear-locking phenomenon and improves the quality of solutions on distorted meshes.

3D-Foot Plantar Pressure Reconstruction based on the IEE Foot Smart Insole
Palmirotta, Guendalina; Bordas, Stéphane et al. E-print/Working paper (2017)
With today's growing technology, study and research on the human foot have also become much more important. Advanced dynamic foot plantar pressure monitoring applications are useful in many healthcare fields, e.g. podiatric and orthopedic applications, rehabilitation tools, and sports and fitness training tools. The new IEE High-Dynamic (HD) 8-multicell smart sensor provides a single-insole solution for daily usage, acquiring information on the plantar load distribution for health prophylaxis in a large range of different shoe configurations in real time. Depending on the tracked features, 4, 8 or more sensing cells may be necessary to pick up the relevant pressure information. However, a high number of cells implies powerful read-out electronics, which in turn implies power-consumption challenges and might lead to customer dissatisfaction, similarly to the first generation of the Apple Watch. Knowledge should therefore be built up on how to obtain, from a limited number of cells, information as relevant as that provided by a high-resolution sensor. This can be very challenging, because every human has a different, unique pressure map, i.e. phenomena concentrate in different foot zones for different people. For example, determining the size and shape of pressure peaks might require a cluster of samples, whereas the relatively flat surrounding region might require only a few. Sophisticated mathematical models are used to generate the complete high-resolution pressure distribution (HRPD) on each foot based on spatial interpolation schemes. The paper is organized as follows: in Section I we provide an overview of the challenges and opportunities around the reconstruction of the 3D foot plantar pressure (FPP); in Section II we present the background needed to understand the generic human gait and describe the new smart insole designed by IEE.
In Section III, we develop and apply the spatial interpolation model (SIM) to our underlying problem. In Section IV, we discuss and present the estimated pressure maps obtained with three different approaches, followed by a comparison and validation of their efficiency, reliability and accuracy. In Section V, we use mathematical optimization methods (MOM), e.g. Particle Swarm Optimization (PSO), to determine the optimal location, as well as the number, of sensing cells needed to capture the relevant foot pressure information. Finally, Section VI gives the concluding remarks and future work on this topic.
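The spatial-interpolation reconstruction described in the last entry can be illustrated with one of the simplest candidate schemes, inverse-distance weighting; the cell positions, readings and exponent below are made-up illustrations, not IEE's actual insole layout or the paper's SIM:

```python
import numpy as np

def idw(p, sensors, readings, power=2.0, eps=1e-12):
    """Inverse-distance-weighted pressure estimate at point p from a few
    sensor-cell readings (a generic scheme, not the paper's SIM)."""
    d = np.linalg.norm(sensors - p, axis=1)
    if d.min() < eps:                 # query point sits on a sensor: return it
        return float(readings[d.argmin()])
    w = 1.0 / d ** power
    return float(w @ readings / w.sum())

# hypothetical insole cells (x, y in cm) and pressure readings (kPa)
sensors = np.array([[1.0, 2.0], [3.0, 1.5], [2.0, 5.0], [4.0, 6.0]])
readings = np.array([120.0, 80.0, 200.0, 150.0])

# reconstruct pressure on a coarse grid of points between the cells
grid = [idw(np.array([x, y]), sensors, readings)
        for x in (1.5, 2.5, 3.5) for y in (2.0, 4.0)]
print(np.round(grid, 1))
```

IDW estimates are always bounded by the minimum and maximum readings, so it cannot reconstruct a pressure peak between sensors; that limitation is precisely why comparing several interpolation approaches, as the paper does, is worthwhile.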